How eye imaging technologies might improve the vision of robots and automobiles

Even though robots do not have eyes with retinas, the key to helping them see and interact with the world more naturally and safely may rest in the optical coherence tomography (OCT) machines commonly found in ophthalmologists’ clinics.

Light Detection and Ranging, or LiDAR for short, is one of the imaging technologies that many robotics companies are incorporating into their sensor packages. The approach, which is currently attracting a great deal of interest and investment from self-driving car makers, works much like radar, except that instead of sending out broad radio waves and listening for reflections, it uses short pulses of light from lasers.
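
To make the principle concrete, a pulsed (time-of-flight) system converts each measured round-trip time directly into a range. The sketch below is a minimal illustration of that arithmetic, with all names and numbers chosen for the example rather than taken from any real LiDAR product:

```python
# Minimal time-of-flight sketch: range is half the round-trip
# distance a light pulse covers between emission and detection.

C = 299_792_458.0  # speed of light in m/s

def tof_range_m(round_trip_s: float) -> float:
    """Range implied by a measured pulse round-trip time."""
    return C * round_trip_s / 2.0

# A reflection arriving 200 ns after the pulse leaves implies ~30 m.
print(f"{tof_range_m(200e-9):.2f} m")  # -> 29.98 m
```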

Traditional time-of-flight LiDAR, however, has several drawbacks that make it poorly suited to many 3D vision applications. Because it requires the detection of very weak reflected light signals, other LiDAR systems or even ambient sunlight can easily overwhelm the detector. It also has limited depth resolution and can take an inordinately long time to densely scan a large area, such as a highway or a factory floor. To address these challenges, researchers are turning to a form of LiDAR called frequency-modulated continuous wave (FMCW) LiDAR.

“FMCW LiDAR works on the same concept as OCT, which has been developed in the biomedical engineering sector since the early 1990s,” said Ruobing Qian, a PhD student in the laboratory of Joseph Izatt, the Michael J. Fitzpatrick Distinguished Professor of Biomedical Engineering at Duke. “However, no one could have predicted the existence of self-driving vehicles or robots 30 years ago, so the technology concentrated on tissue imaging. To make it usable in these other growing domains, we must trade off its extraordinarily high resolution capabilities for greater distance and speed.”

In an article published March 29 in the journal Nature Communications, the Duke team demonstrates how a few tricks learned from their OCT research can improve on previous FMCW LiDAR data throughput by 25 times while still achieving submillimeter depth accuracy.

OCT is the optical analogue of ultrasound, which works by sending sound waves into objects and measuring how long they take to return. To time the return of light waves, OCT devices measure how much their phase has shifted compared to identical light waves that have traveled the same distance but have not interacted with another object.
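
As a rough illustration of that phase comparison, the sketch below converts a round-trip phase shift into a depth. The wavelength and function are assumptions chosen for the example, not details from the paper:

```python
import math

WAVELENGTH_M = 1310e-9  # assumed source wavelength, common in OCT

def phase_shift_to_depth_m(delta_phi_rad: float) -> float:
    """Extra one-way depth implied by a round-trip phase shift.

    A round trip adds 2*d of path, and each wavelength of path is
    2*pi of phase, so d = delta_phi * lambda / (4*pi). This is only
    unambiguous within half a wavelength; real systems unwrap phase.
    """
    return delta_phi_rad * WAVELENGTH_M / (4.0 * math.pi)

# A phase shift of pi corresponds to a quarter-wavelength of depth.
print(f"{phase_shift_to_depth_m(math.pi) * 1e9:.1f} nm")  # -> 327.5 nm
```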

FMCW LiDAR takes a similar approach, with a few tweaks. The system emits a laser beam that continually shifts between different frequencies. When the detector gathers light to measure its reflection time, it can distinguish the specific frequency pattern from any other light source, allowing it to work in all lighting conditions at high speed. It then measures any phase shift against unimpeded beams, which is a much more accurate way to determine distance than current LiDAR systems.
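
One common way to read out such a system: because the laser's frequency sweeps linearly, an echo delayed by the round trip produces a beat note proportional to range when mixed with the outgoing beam. The sketch below shows that conversion under assumed sweep parameters, which are illustrative rather than the paper's:

```python
C = 299_792_458.0          # speed of light, m/s
SWEEP_BANDWIDTH_HZ = 10e9  # assumed chirp bandwidth B
SWEEP_PERIOD_S = 100e-6    # assumed chirp duration T

def beat_to_range_m(f_beat_hz: float) -> float:
    """Range from the measured beat frequency: d = c * f * T / (2 * B)."""
    return C * f_beat_hz * SWEEP_PERIOD_S / (2.0 * SWEEP_BANDWIDTH_HZ)

# With this sweep, a 1 MHz beat note corresponds to roughly 1.5 m.
print(f"{beat_to_range_m(1e6):.2f} m")  # -> 1.50 m
```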

“It’s been quite thrilling to see how the biological cell-scale imaging technology we’ve been developing for decades is immediately translatable for large-scale, real-time 3D vision,” Izatt said. “These are precisely the qualities required for robots to safely observe and interact with people, or even to replace avatars in augmented reality with live 3D video.”

Most previous LiDAR work has relied on spinning mirrors to scan the laser over the landscape. While this approach works well, it is fundamentally limited by the speed of the mechanical mirror, no matter how powerful the laser it uses.
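
As a back-of-envelope illustration of that bottleneck, the measurement rate of a spinning-mirror scanner is capped by the mirror itself, however powerful the laser. Both numbers below are invented for the example:

```python
ROTATIONS_PER_S = 100       # assumed mirror spin rate
POINTS_PER_ROTATION = 2000  # assumed angular samples per revolution

# No amount of extra laser power raises this ceiling; only a faster
# (or non-mechanical) scanning method can.
max_points_per_s = ROTATIONS_PER_S * POINTS_PER_ROTATION
print(f"mechanical ceiling: {max_points_per_s:,} points/s")  # 200,000
```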

Instead, the Duke researchers use a diffraction grating that works like a prism, breaking the laser into a rainbow of frequencies that spread out as they travel away from the source. Because the original laser is still rapidly sweeping through a range of frequencies, this translates into sweeping the LiDAR beam much faster than a mechanical mirror can rotate. This allows the system to quickly cover a wide area without losing much depth or location accuracy.
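
The steering effect follows the standard grating equation, sin(θ_out) = sin(θ_in) + mλ/d, where d is the groove spacing and m the diffraction order. A small sketch with an assumed 500-line-per-millimeter grating at normal incidence shows how a 100 nm wavelength sweep becomes an angular sweep of a few degrees:

```python
import math

GROOVE_SPACING_M = 2.0e-6  # assumed 500 lines/mm grating
ORDER = 1                  # first diffraction order
THETA_IN_RAD = 0.0         # assumed normal incidence

def diffraction_angle_deg(wavelength_m: float) -> float:
    """Output angle from the grating equation for the assumed setup."""
    s = math.sin(THETA_IN_RAD) + ORDER * wavelength_m / GROOVE_SPACING_M
    return math.degrees(math.asin(s))

# Sweeping the laser across 100 nm steers the beam by a few degrees.
for wl in (1260e-9, 1310e-9, 1360e-9):
    print(f"{wl * 1e9:.0f} nm -> {diffraction_angle_deg(wl):.1f} deg")
```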

While OCT devices are used to profile microscopic structures up to several millimeters deep within an object, robotic 3D vision systems only need to locate the surfaces of human-scale objects. To accomplish this, the researchers narrowed the range of frequencies used by OCT and only looked for the peak signal generated by the surfaces of objects. This costs the system a little resolution, but gives it far greater imaging range and speed than traditional LiDAR.
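
One way to picture the peak-only strategy: each depth maps to its own beat frequency, so an object's surface appears as the strongest peak in the spectrum of the detected signal, and everything else can be ignored. The toy example below, with invented sample rates and scaling, recovers a single surface that way:

```python
import numpy as np

FS = 10e6           # assumed detector sample rate, Hz
N = 4096            # samples per sweep
HZ_PER_METER = 1e6  # assumed beat-frequency-to-range scaling

# Simulate one sweep's detector signal: a surface at 2.3 m produces
# a 2.3 MHz beat tone, buried in detector noise.
t = np.arange(N) / FS
true_range_m = 2.3
signal = np.cos(2 * np.pi * HZ_PER_METER * true_range_m * t)
signal += 0.5 * np.random.randn(N)

# Keep only the strongest spectral peak (skipping the DC bin) and
# convert it back to a range.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(N, d=1 / FS)
peak_hz = freqs[np.argmax(spectrum[1:]) + 1]
print(f"estimated range: {peak_hz / HZ_PER_METER:.2f} m")  # ~2.30 m
```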

The result is an FMCW LiDAR system that achieves submillimeter localization accuracy with data throughput 25 times greater than previous demonstrations. The findings show that the approach is fast and precise enough to capture the details of moving human body parts, such as a nodding head or a clenched fist, in real time.

“In much the same way as electronic cameras have become ubiquitous,” Izatt said, “our objective is to produce a new generation of LiDAR-based 3D cameras that are fast and sophisticated enough to allow the integration of 3D vision into all types of products. Because the environment around us is 3D, if we want robots and other automated systems to interact with us naturally and safely, they must be able to see us as well as we see them.”

This research was funded by the National Institutes of Health (EY028079), the National Science Foundation (CBET-1902904), and the Department of Defense CDMRP (W81XWH-16-1-0498).
