With a “nanosecond” camera, drones will analyze events faster than they occur
Advanced autopilot systems have learned to respond adequately to dangerous situations, but their "eyes" remain the weak link. The bottleneck is the rate at which data from optical surveillance cameras can be processed. At a recording speed of 120 frames per second, any stray light source entering the field of view generates gigabytes of unnecessary data, and computing power must be spent just to filter it out.
Professor Chen Shoushun of Nanyang Technological University in Singapore proposes to change everything: the cameras, the image-processing algorithms, and the control methods. Instead of capturing complete information about an event and then running pattern recognition, his system responds to point changes in the light received by individual pixels of the sensor. Signals are checked at intervals on the order of nanoseconds, enabling near-instant analysis of the scene.
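The per-pixel principle described above can be sketched in a few lines. This is an illustrative simulation of how an event-style sensor emits data only for pixels whose brightness changes past a threshold; the function name, threshold, and log-contrast model are assumptions for illustration, not details of the Celex design.

```python
import numpy as np

def detect_events(prev_frame, new_frame, threshold=0.15):
    """Return (row, col, polarity) tuples for pixels whose log-intensity
    changed by more than `threshold` since the previous reading.
    Illustrative model only: real event sensors do this in analog hardware."""
    eps = 1e-6  # avoid log(0)
    # Event sensors respond to relative contrast, so compare log intensities.
    diff = np.log(new_frame + eps) - np.log(prev_frame + eps)
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    return [(int(r), int(c), 1 if diff[r, c] > 0 else -1)
            for r, c in zip(rows, cols)]

# A static scene produces no output at all; one brightening pixel
# produces exactly one small event instead of a whole new frame.
prev = np.full((4, 4), 0.5)
new = prev.copy()
new[2, 3] = 0.9
print(detect_events(prev, prev))  # -> []
print(detect_events(prev, new))   # -> [(2, 3, 1)]
```

The payoff is that a scene with nothing moving generates no data, which is exactly the filtering problem the frame-based 120 fps pipeline has to solve in software.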
"Instantaneous" here means that the system manages to react and send a command to the control module faster than objects can shift position in real space. For example, a car in the oncoming lane has only just begun a dangerous maneuver, and the drone has already concluded that this speck of light is dangerous and that it must swerve aside. Identifying the object as a truck, assembling the full video, and writing it to memory can all happen later.
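A back-of-the-envelope calculation shows why sensing latency translates into displacement in real space. The speeds and latencies below are illustrative assumptions, not figures from the article:

```python
# How far does an oncoming vehicle travel during one sensing interval?
# All numbers are illustrative assumptions.
speed_mps = 30.0        # oncoming car at ~108 km/h
frame_dt = 1 / 120      # conventional camera: one frame every 8.3 ms
event_dt = 1e-6         # event-driven pipeline: microsecond-scale check

print(speed_mps * frame_dt)  # -> 0.25  (meters moved between frames)
print(speed_mps * event_dt)  # -> 3e-05 (meters moved between checks)
```

Between two frames of a 120 fps camera the car covers a quarter of a meter; at nanosecond-to-microsecond check intervals its displacement is effectively zero, which is what lets the controller act before the scene changes.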
A special data format with compact file sizes was developed for the camera, which is called Celex. The current prototype, comprising the camera and a scene-analysis module, was shown at the EI 2017 Symposium. The Hillhouse Tech business group has expressed interest, and if things go smoothly, a commercial version of the ultra-fast camera for drones may appear by the end of this year.
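To give a sense of why per-event files stay compact, here is a hypothetical fixed-width event record in the spirit of the address-event representations used by event cameras. The field layout is purely an assumption for illustration; the actual Celex format is not described in the article.

```python
import struct

# Assumed layout, 13 bytes per event (little-endian, no padding):
# 16-bit x, 16-bit y, signed 8-bit polarity, 64-bit nanosecond timestamp.
EVENT_FMT = "<HHbQ"

def pack_event(x, y, polarity, t_ns):
    """Serialize one event into a compact binary record."""
    return struct.pack(EVENT_FMT, x, y, polarity, t_ns)

def unpack_event(buf):
    """Restore (x, y, polarity, t_ns) from a packed record."""
    return struct.unpack(EVENT_FMT, buf)

rec = pack_event(640, 480, 1, 123_456_789)
print(len(rec))           # -> 13 (bytes per event vs. megabytes per frame)
print(unpack_event(rec))  # -> (640, 480, 1, 123456789)
```

Storing only the handful of pixels that changed, rather than every pixel of every frame, is what keeps the files small.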