
【Featured Article】“Robotic white cane” uses computer vision to aid blind and visually impaired


Past attempts to enhance the white cane with technology have suffered from substantial error accumulation over long distances and from poor interfaces, often rendering them less useful than the analog tool they aim to replace. A novel approach using computer vision and a vibrating interface aims to change this.

Technologies from GPS to artificial intelligence are transforming transportation for sighted travellers, but for those who are blind or visually impaired, navigation technology has changed very little. The decades-old white cane and guide dog remain their main mobility tools. Researchers at Virginia Commonwealth University wanted to address this technology disparity and develop a "robotic white cane".

Paper Information

He Zhang, Lingqiu Jin and Cang Ye, "An RGB-D Camera Based Visual Positioning System for Assistive Navigation by a Robotic Navigation Aid," IEEE/CAA J. Autom. Sinica, vol. 8, no. 8, pp. 1389-1400, Aug. 2021. 

https://www.ieee-jas.net/en/article/doi/10.1109/JAS.2021.1004084


Past efforts at developing robotic navigation aids, or RNAs, have used a simultaneous localization and mapping (SLAM) technique. This involves using cameras to estimate the position and orientation (or, more formally, the "pose") of the RNA with respect to its surroundings and to detect nearby obstacles from a previously generated 3D point cloud map. A point cloud is a set of data points, each with Cartesian coordinates (up-down, left-right, front-back), that together represent the 3D shape of objects; such clouds often contain millions of points and are typically produced by 3D scanners.
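To make "pose" and "point cloud" concrete, the toy Python sketch below (not code from the paper) represents a camera pose as a rotation matrix and a translation vector, and uses it to map points from the camera frame into the world frame:

```python
import numpy as np

# A 6-DoF pose: rotation matrix R (3x3) and translation t (3,).
# Expressing a point cloud in the world frame applies
# p_world = R @ p_cam + t to every point.

def transform_cloud(points_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map an (N, 3) point cloud from camera coordinates to world coordinates."""
    return points_cam @ R.T + t

# Example: camera rotated 90 degrees about the vertical axis and
# placed 2 m forward of the world origin.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([2.0, 0.0, 0.0])
cloud = np.random.rand(5, 3)          # five arbitrary points in the camera frame
print(transform_cloud(cloud, R, t))
```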

However, error in pose estimation accrues over time. This is not much of a problem for short journeys, but over long ones the accumulated error is enough to break down the navigation function of an RNA. A range of solutions has been proposed to compensate, from installing Bluetooth beacons or radio-frequency identification (RFID) chips in the environment to constructing a visual map ahead of time. But building such a map is very time-consuming, and placing beacons or chips is impractical for all but a tiny handful of applications, rendering such 'improvements' less useful in practice than a plain cane.
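A toy dead-reckoning simulation illustrates how quickly small per-step errors compound; the step length and noise level here are purely illustrative, not figures from the paper:

```python
import numpy as np

# Illustrative only: a walker takes 1 m steps, and at each step the
# estimated heading picks up a small random error (std. dev. 1 degree).
# Dead-reckoned position error grows with distance travelled.
rng = np.random.default_rng(0)
heading_true = heading_est = 0.0
pos_true = np.zeros(2)
pos_est = np.zeros(2)
for step in range(1, 501):                           # a 500 m walk
    heading_est += rng.normal(0.0, np.deg2rad(1.0))  # per-step heading drift
    pos_true += np.array([np.cos(heading_true), np.sin(heading_true)])
    pos_est  += np.array([np.cos(heading_est),  np.sin(heading_est)])
    if step % 100 == 0:
        print(f"{step:4d} m travelled, position error "
              f"{np.linalg.norm(pos_est - pos_true):6.1f} m")
```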

To overcome these limitations, the team developed a computer vision technique that uses an RGB-D camera, a gyroscope, and an accelerometer to measure how the RNA moves and rotates in space. The camera produces both a color image and depth data for each image pixel. The system combines depth data for visual features in the environment with the plane of the floor or ground. Because the floor plane remains observable throughout, it serves as a constant reference against which all other data are compared, significantly reducing pose error.
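The paper's own floor-plane extraction is not reproduced here, but a common way to pull a dominant plane such as the floor out of RGB-D depth data is a RANSAC fit, sketched below as one plausible approach:

```python
import numpy as np

def ransac_floor_plane(points, n_iters=200, dist_thresh=0.02, rng=None):
    """Fit a plane (n, d) with n . p + d = 0 to an (N, 3) cloud by RANSAC.

    Returns the plane supported by the most points within dist_thresh
    metres, e.g. the floor when it dominates the lower depth image.
    """
    rng = rng or np.random.default_rng()
    best_inliers, best_plane = 0, None
    for _ in range(n_iters):
        # Sample three distinct points and fit a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ p0
        # Count points lying close to the candidate plane.
        inliers = np.sum(np.abs(points @ n + d) < dist_thresh)
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, (n, d)
    return best_plane
```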

A statistical method then reduces the error still further. An initial estimated pose is used as a 'seed' for generating numerous other probable poses around it. The system then computes what the onboard sensor should measure at each candidate pose, based on the 2D floor plan map, and compares this prediction to the actual sensor measurement.
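This description resembles particle-filter (Monte Carlo) localization. The sketch below is a schematic under that assumption, not the authors' implementation; `predict_scan` stands in for a hypothetical routine that computes expected readings from the 2D floor plan map:

```python
import numpy as np

def refine_pose(seed_pose, actual_scan, predict_scan, n_candidates=500,
                pos_sigma=0.1, yaw_sigma=0.05, rng=None):
    """Schematic particle-filter-style refinement of a pose (x, y, yaw).

    predict_scan(pose) -> expected range readings from the floor plan map
    (hypothetical helper, not from the paper). Returns the weighted mean
    of candidate poses scattered around the seed.
    """
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, [pos_sigma, pos_sigma, yaw_sigma],
                       (n_candidates, 3))
    candidates = np.asarray(seed_pose) + noise
    # Score each candidate: small prediction error -> large weight.
    errors = np.array([np.linalg.norm(predict_scan(c) - actual_scan)
                       for c in candidates])
    weights = np.exp(-0.5 * (errors / (errors.std() + 1e-9)) ** 2)
    weights /= weights.sum()
    return weights @ candidates
```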

“In essence, the method uses geometric features such as door frames, hallways, junctions, etc., from the 2D floor plan map to reduce pose estimation errors,” says professor Cang Ye, an engineer specializing in computer vision and the corresponding author of the paper. Ye is currently a professor in the Department of Computer Science at Virginia Commonwealth University, USA.

Other efforts at technological improvements in conventional white canes have also stumbled over the effectiveness of their human-machine interface. Existing devices tend to use a speech interface delivering turn-by-turn navigational commands.

“These really are not very helpful to the visually impaired,” Ye adds. “They’re humans walking along at a natural pace, not drivers of cars waiting at a stop light.”

So the team abandoned that approach entirely and instead designed a novel 'robotic roller tip' at the end of an otherwise conventional white cane. It consists of a rolling tip like the end of a ballpoint pen, an electromagnetic clutch, and a motor. Engaging the clutch puts the cane in robotic mode: the motor rotates the rolling tip to steer the cane in the direction calculated by the onboard computer vision system, while a vibrator in the cane subtly conveys the desired direction to the user through a coded vibration pattern.
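A hypothetical sketch of such an interface loop appears below; the proportional-steering gain, the 5-degree dead band, and the vibration codes are invented for illustration and are not taken from the paper:

```python
import math

def steer(current_heading_rad: float, desired_heading_rad: float):
    """Map a heading error to a motor command and a vibration code (hypothetical)."""
    # Wrap the heading error into (-pi, pi].
    error = math.atan2(math.sin(desired_heading_rad - current_heading_rad),
                       math.cos(desired_heading_rad - current_heading_rad))
    motor_cmd = max(-1.0, min(1.0, 2.0 * error))   # proportional steering
    if abs(error) < math.radians(5):
        vibration = "steady"        # on course
    elif error > 0:
        vibration = "double-pulse"  # suggest turning left
    else:
        vibration = "single-pulse"  # suggest turning right
    return motor_cmd, vibration

print(steer(0.0, math.radians(30)))  # -> (1.0, 'double-pulse')
```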

If the user swings the device or lifts it off the ground, the onboard sensors detect this instantly and the device automatically switches into plain white-cane mode. It stays in that mode until the tip returns to the ground, giving the system an automatic mode-switching capability that spares the user from having to toggle back and forth consciously.
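A minimal sketch of this automatic mode switching, assuming lift-off and swinging are inferred from IMU readings (the threshold values are illustrative, not from the paper):

```python
GRAVITY = 9.81  # m/s^2

def update_mode(mode: str, vertical_accel: float, angular_rate: float) -> str:
    """Switch between robotic and plain white-cane mode from IMU readings (sketch)."""
    lifted = abs(vertical_accel - GRAVITY) > 3.0   # tip pulled off the ground
    swinging = abs(angular_rate) > 2.0             # rad/s, user sweeping the cane
    if mode == "robotic" and (lifted or swinging):
        return "white_cane"                        # disengage clutch and motor
    if mode == "white_cane" and not (lifted or swinging):
        return "robotic"                           # ground contact restored
    return mode

mode = "robotic"
for accel, rate in [(9.8, 0.1), (14.5, 0.3), (9.7, 0.1)]:
    mode = update_mode(mode, accel, rate)
    print(mode)   # robotic -> white_cane -> robotic
```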

Having developed the prototype, the team are now working to reduce its weight and cost.

Release Date: 2021-09-06