MIT has developed a system that detects tiny changes in the shadows on the ground to determine whether a moving object is approaching around a corner, an advance that could greatly improve the safety of autonomous cars and robots.
As sophisticated as they are, the mapping and detection systems of autonomous cars cannot overcome certain physical limits: another car or a pedestrian, for example, may emerge from behind the corner of a building or from between two parked cars. But MIT engineers have found a solution that could radically improve the "vision" of autonomous cars and robots alike.
It is a system for detecting shadows cast on the ground as a moving object approaches. Dubbed ShadowCam, the system uses sequences of video frames from a camera pointed at a specific area. It detects changes in light intensity from frame to frame that may indicate that something is moving away or drawing closer. The system computes this information and classifies each image as containing either a static object or a dynamic, moving one.
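The core idea of frame-to-frame intensity comparison can be sketched in a few lines. This is a minimal illustration, not ShadowCam's actual classifier: the `classify_frames` helper and its threshold value are hypothetical, and the real system works on amplified, registered images rather than raw differences.

```python
import numpy as np

def classify_frames(frames, threshold=2.0):
    """Label each consecutive frame pair 'dynamic' or 'static' based on
    the mean absolute change in pixel intensity between the two frames.
    (Hypothetical threshold; ShadowCam's real pipeline is more involved.)"""
    labels = []
    for prev, curr in zip(frames, frames[1:]):
        delta = np.abs(curr.astype(float) - prev.astype(float)).mean()
        labels.append("dynamic" if delta > threshold else "static")
    return labels

# Synthetic example: a faint "shadow" darkens part of the last frame.
static = np.full((8, 8), 200, dtype=np.uint8)
shadowed = static.copy()
shadowed[:, 4:] -= 40          # a creeping shadow dims half the region
frames = [static, static.copy(), shadowed]
print(classify_frames(frames))  # → ['static', 'dynamic']
```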
To adapt the device to autonomous machines, the researchers developed a visual odometry technique that superimposes successive images to reveal variations, a method used notably in medical imaging to compare and analyze differences between scans. For a vehicle in motion, the system targets a previously defined area of interest, such as the ground at a street corner, and uses visual odometry to superimpose all the images and thereby detect a variation, however subtle it is.
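The superimposition step can be illustrated with a toy alignment routine. As an assumption for this sketch, camera motion is reduced to an integer translation and estimated by brute-force search; real visual odometry estimates full camera motion, but the principle is the same: once frames are aligned, any residual difference in the region of interest is a genuine change such as a moving shadow.

```python
import numpy as np

def estimate_shift(ref, img, max_shift=3):
    """Brute-force integer translation estimate (a stand-in for real
    visual odometry): find the (dy, dx) that best aligns img to ref."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            err = np.mean((shifted.astype(float) - ref.astype(float)) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def align(img, shift):
    """Superimpose img onto the reference by undoing the camera motion."""
    dy, dx = shift
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

# Reference frame: a bright patch on dark ground.
ref = np.zeros((16, 16), dtype=np.uint8)
ref[4:8, 4:8] = 255
# The camera moved, so the same scene appears shifted by (2, 1) pixels.
moved = np.roll(np.roll(ref, -2, axis=0), -1, axis=1)

shift = estimate_shift(ref, moved)
print(shift)  # → (2, 1)
# After alignment the frames superimpose exactly; any remaining
# difference would indicate a real change in the scene.
print(np.abs(align(moved, shift).astype(int) - ref).max())  # → 0
```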
Faster than a Lidar
Tested on an autonomous car and a wheelchair, ShadowCam proved to be more than half a second faster than a lidar, the most common type of sensor used in spatial mapping systems. As the authors of the experiment note, this gain of a fraction of a second can be crucial to avoid a collision and potentially save lives.
But the system is still very limited. It has only been tested indoors, where travel speeds are far lower than in real traffic and where lighting conditions are more stable. The next step will be to improve ShadowCam so it works in real-world situations with variable lighting, and to automate the process of annotating the target areas for shadow detection.