A curious case of the self-driving car vs the homing pigeon.
Imagine yourself as a tourist in a foreign city you have never visited before. Your tour guide drops you in the city centre and tells you that you have some free time: you can wander around, then come back to the meeting point half an hour later. You leave the tour bus and, to your surprise, your smartphone does not work.
If you are a stranger in this strange land without GPS and with a time constraint, you have to help yourself. You have neither the time nor the means to wait for the cavalry to come to your rescue. You will probably notice some landmarks and, as you walk, build up a real-time map in your head. This is goal-oriented behaviour in unknown territory. Your goal is to explore the area within your time limit and get back to your starting point before it is too late. At some point during your walk, however, you will end up at a location where you do not know exactly where you are. To find your way back, you may first use familiar landmarks, then set a direction towards the target location. Periodic position fixes update your mental compass, letting you correct your walking course and handle detours.
In this scenario, the tourist orients according to a navigational principle using only one mental set of coordinates: the tour bus location. This allows the tourist to use simpler strategies to find the way back to the meeting point, e.g. following a chain of landmarks until they reach the target destination. What if you wanted to drop by a souvenir shop you happened to see earlier and then continue your walk back to the tour bus? You would need knowledge of two memorized places in relation to your own position in an unknown area. As the number of target locations increases, the complexity of navigation requires a cognitive map in which you move from point to point without the assistance of familiar landmarks. Still, despite certain inadequacies, humans are quite good at navigating an unfamiliar place.
Now, please take a look at this:
A bird is a navigation wonder. I do not say this lightly. Birds navigate over long distances remarkably well. A good example is the homing pigeon, a remarkable navigator. Homing pigeons can memorize different target locations and establish a spatial relationship between themselves and their position in unknown territory (source: N. Blaser, 2013). They have a superb sense of direction and can consistently find their way back to their nest.
The incredible homing pigeon
Homing pigeons are able to return to their home site when passively displaced to unfamiliar areas several hundred kilometres away. In one noted experiment, the German scientist Hans Wallraff transported homing pigeons to a very distant location. To ensure that the birds did not receive any external navigational information, they were transferred under stringent conditions. After release in the distant and completely unknown area, the birds were able to fly home to their roost, apparently without trouble. Did they use GPS, Google Maps or advice from more experienced pigeons?
There are several theories to explain homing pigeons' ability to find their target location. All of them are functional explanations of the two abilities the homing pigeon must have to find its way:
1. An internal map
2. A compass
Using a compass alone, a pigeon cannot work out in what direction it has to fly. The crucial piece of information is the pigeon's current position relative to its home: it has to localize itself on its internal map and derive the relative position from it. The lowly homing pigeon is obviously more successful than humans at detecting environmental parameters that are useful for home-finding over distances that sometimes exceed 100 km. A pigeon's entire brain contains about 310 million neurons; a human brain has roughly 100 billion. Relatively speaking, humans have immense computing power, yet in a task like navigation they are still outperformed by the lowly homing pigeon.
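In its simplest form, the "map plus compass" idea reduces to computing a bearing: knowing where you are and where home is on the map yields the compass heading to fly. A minimal sketch (the coordinates are made up for illustration):

```python
import math

def home_bearing(current, home):
    """Initial great-circle bearing, in compass degrees, from the
    current position to home. Positions are (lat, lon) in degrees."""
    lat1, lon1 = map(math.radians, current)
    lat2, lon2 = map(math.radians, home)
    dlon = lon2 - lon1
    x = math.sin(dlon) * math.cos(lat2)
    y = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
    return math.degrees(math.atan2(x, y)) % 360

# Home lies due north of the release site -> bearing ~0 degrees.
bearing = home_bearing((50.0, 8.0), (51.0, 8.0))
```

The map supplies the two positions; the compass (sun or magnetic) is what lets the bird actually hold that heading.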
How does a self-driving car navigate?
If you were a commercial airline pilot, at the start of the flight you would load a predetermined route into the Flight Management System. In this way, the route of the flight is imposed onto a moving map which the pilots can monitor on their screens throughout the flight.
Much like the way planes navigate, a self-driving car uses a map too. The map contains things that do not change very often; it describes the static environment. A mapping team categorizes different features of the environment, such as intersections, driveways, or traffic lights, and the map becomes the "world outside" for the car. The remaining challenge is to localize the car within this map.
To determine its location, the self-driving car uses a combination of three methods: relative location, absolute location and hybrid location. For relative location, the current position of the self-driving car is obtained by adding the distance and direction moved to the prior position. The absolute location is obtained from a positioning system; a common one is a satellite-based system such as GPS. The hybrid location combines the characteristics of the two methods above.
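The three methods can be sketched in a few lines. This is a deliberately crude illustration: the blend weight and the simple averaging fusion are my assumptions (production systems typically use a Kalman filter), and the coordinates are an invented local planar frame.

```python
import math

def dead_reckon(position, heading_deg, distance):
    """Relative location: advance the prior (x, y) position, in metres,
    by a travelled distance along a compass heading."""
    x, y = position
    h = math.radians(heading_deg)
    return (x + distance * math.sin(h), y + distance * math.cos(h))

def hybrid_update(dr_pos, gps_pos, gps_weight=0.3):
    """Hybrid location: blend the dead-reckoned estimate with an
    absolute GPS fix using a fixed weight (a crude stand-in for a
    proper Kalman filter)."""
    return tuple((1 - gps_weight) * d + gps_weight * g
                 for d, g in zip(dr_pos, gps_pos))

pos = (0.0, 0.0)
pos = dead_reckon(pos, 90.0, 10.0)     # drive 10 m due east
pos = hybrid_update(pos, (10.4, 0.2))  # correct with a noisy GPS fix
```

Dead reckoning alone drifts without bound; the periodic absolute fix is what keeps the hybrid estimate anchored.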
At the end of the day, the location data and the map must be fused to plan the path. Map matching, the foundation of path planning, calculates the car's location by combining the GPS information with the map information. GPS information can also be coupled with other observable landmarks. With a map that holds both the navigational information needed by the path planner and observable landmarks, the car can use on-board sensors to detect the angle and/or range to the landmarks, and triangulate a position relative to the navigational information in the map.
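As a minimal, hypothetical sketch of landmark-based fixing (real systems fuse many landmarks probabilistically), a single mapped landmark plus a measured bearing and range already pins down a position:

```python
import math

def fix_from_landmark(landmark_xy, bearing_deg, range_m):
    """Position fix from one mapped landmark: the sensor measures the
    compass bearing and range to the landmark, so the car must sit at
    the landmark's map position displaced back along that bearing."""
    lx, ly = landmark_xy
    b = math.radians(bearing_deg)
    return (lx - range_m * math.sin(b), ly - range_m * math.cos(b))

# A mapped traffic light (invented coordinates) at (100, 50); the car
# sees it 30 m away on a bearing of 45 degrees.
fix = fix_from_landmark((100.0, 50.0), 45.0, 30.0)
```

With two or more landmarks the same idea over-determines the position, which lets the car average out sensor noise.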
Once the path from the current location to the target location is planned, the car stays in its lane by following a route that has been predetermined, very much like a rail system. This allows the AI in the car to concentrate only on the things in its environment that are changing: cars, pedestrians, unexpected obstacles, construction and the like. In this scenario, the car is like a tram with a robot conductor. According to Intel, in terms of compute, approximately 1 GB of data will need to be processed each second in the car's real-time operating system. This data is used to react to changes in its surroundings and to figure out when, how hard, and how fast to brake.
Who is the best navigator?
Take your self-driving car to a city where it has never driven before. Then wait for the mapping team to arrive if you want to get back home: without a map in place, the self-driving car cannot localize itself in the world. With a homing pigeon, you have a better chance of finding your way back. With just 310 million neurons, using very little power, the lowly homing pigeon can outperform the self-driving car in a heartbeat.
Finding my way home using the world around me
To navigate, homing pigeons require a ‘map’ (to tell them home is west, for example) and a ‘compass’ (to tell them where west is relative to their current position), with the sun and the Earth’s magnetic field being the preferred compass systems.
One theory suggests that homing pigeons may use an olfactory map. A model of avian goal-oriented navigation developed by Hans Wallraff of the Max Planck Institute holds that, to orient their courses homeward from distant unfamiliar areas, homing pigeons require wind-borne olfactory access to the environmental air at home and abroad.
A second theory suggests that birds use the Earth's magnetic field to obtain at least a partial map of their position. The Earth's magnetic field becomes stronger as you travel away from the equator and toward the poles. In theory, a bird might be able to estimate its latitude based on the strength of the magnetic field.
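To make the latitude idea concrete, here is a hedged sketch that inverts the intensity formula of an idealized dipole field. The equatorial intensity value is a rough round number, and a real bird (or engineer) would have to cope with magnetic anomalies and daily variation that this toy model ignores.

```python
import math

B_EQUATOR = 30_000  # nT; rough surface intensity of an ideal dipole at the equator

def latitude_from_field(B_nT):
    """Estimate unsigned magnetic latitude (degrees) from total field
    strength by inverting the dipole formula
    B = B_eq * sqrt(1 + 3 * sin(lat)**2)."""
    ratio = (B_nT / B_EQUATOR) ** 2
    s2 = max(0.0, (ratio - 1.0) / 3.0)          # sin(lat)**2
    return math.degrees(math.asin(math.sqrt(min(1.0, s2))))
```

In this model the field doubles between equator and pole, so intensity alone gives latitude but says nothing about longitude, which is why it could supply only a partial map.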
For self-driving cars, there are several approaches to navigation.
LiDAR, for “light detection and ranging” — that bounces anywhere from 16 to 128 laser beams off approaching objects to assess their distance and hard/soft characteristics and generate a point cloud of the environment.
GPS — that determines the car’s location in the physical world within a range of one inch, at least in theory.
IMU, for “inertial measurement unit,” — that tracks a vehicle’s attitude, velocity, and position.
Radar — that detects other objects and vehicles.
Camera — that captures the environment visually. The analysis of everything a camera sees requires a powerful computer, so work is being done to reduce this workload by directing its attention only to the relevant objects in view.
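As a small illustration of the LiDAR item above, raw returns are just angle-range pairs, and building the point cloud is a polar-to-Cartesian conversion. This is a simplified 2-D sketch with invented readings; a real sensor also reports beam elevation and must compensate for the car's own motion.

```python
import math

def lidar_to_points(returns, sensor_xy=(0.0, 0.0)):
    """Convert raw LiDAR returns, given as (angle_deg, range_m) pairs,
    into a 2-D point cloud in the car's frame."""
    sx, sy = sensor_xy
    return [(sx + r * math.cos(math.radians(a)),
             sy + r * math.sin(math.radians(a)))
            for a, r in returns]

# Three invented returns: ahead at 5 m, left at 2.5 m, behind at 4 m.
cloud = lidar_to_points([(0.0, 5.0), (90.0, 2.5), (180.0, 4.0)])
```

Everything downstream — obstacle detection, free-space estimation, landmark matching — consumes point clouds like this one.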
Embodied evolutionary computing for self-driving cars
In a self-driving car, the navigation function is fully attributed to the brain of the car: navigation is fully computed. The homing pigeon, in contrast, offloads part of the computational load to body-environment dynamics. The take-away idea is that reducing computation in the brain by exploiting the body has immense advantages. Having a purpose-built body coupled with the information in the environment makes the organism robust: as long as the body is sufficiently intact and the information is available, the system performs without a glitch.
In natural systems, morphology and neural substrate always co-evolve, which implies that the adaptive potential of both is exploited: the computational load is shared between the body-environment coupling and the neural networks. In our self-driving car example, the morphology is given; the car does not evolve sensors or body parts. What we are left with is finding the weight matrix of a neural network that evaluates the changes in the static environment. Although it is a major challenge, evolving phenotypes such as sensory systems or motor controllers at the same level of complexity as found in neural networks could be the only route to robust navigation for self-driving cars.
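To make "finding the weight matrix by evolution" concrete, here is a toy sketch of the approach: a (1+1) evolution strategy tuning a linear controller's weights against recorded samples. The fitness function, the samples and every hyperparameter are illustrative assumptions, not a real driving stack.

```python
import random

def fitness(weights, samples):
    """Toy fitness: negated squared error of a linear 'controller'
    w . x against a target steering response on recorded samples
    (higher, i.e. closer to zero, is better)."""
    return -sum((sum(w * xi for w, xi in zip(weights, x)) - y) ** 2
                for x, y in samples)

def evolve(n_weights, samples, generations=200, sigma=0.1, seed=0):
    """A minimal (1+1) evolution strategy: mutate the weight vector
    with Gaussian noise and keep the child only if fitness improves."""
    rng = random.Random(seed)
    parent = [rng.uniform(-1, 1) for _ in range(n_weights)]
    best = fitness(parent, samples)
    for _ in range(generations):
        child = [w + rng.gauss(0, sigma) for w in parent]
        f = fitness(child, samples)
        if f >= best:
            parent, best = child, f
    return parent, best

# Invented target behaviour, y = 2*x0 - x1, on three sensor readings.
samples = [((1.0, 0.0), 2.0), ((0.0, 1.0), -1.0), ((1.0, 1.0), 1.0)]
weights, score = evolve(2, samples)
```

Evolving the controller rather than hand-coding it is the point of contact with the co-evolution argument: the same search loop could, in principle, also mutate sensor placements or body parameters, which is exactly what the car's fixed morphology forbids.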