University researchers have developed an approach for making robots more reliable by adapting to varying levels of vulnerabilities. | Source: Adobe Stock

Researchers investigate how to design low-cost, highly reliable robots

Researchers from the University of Rochester, Georgia Tech, and the Shenzhen Institute of Artificial Intelligence and Robotics for Society have proposed a new approach for protecting robots against vulnerabilities while keeping overhead costs low.

Millions of self-driving cars are projected to be on the road in 2025, and autonomous drones are already generating billions of dollars in annual sales. As these systems proliferate, safety and reliability are critical considerations for consumers, manufacturers, and regulators.

However, systems for protecting autonomous machine hardware and software from malfunctions, attacks, and other failures also increase costs. Those costs show up as performance overhead, energy consumption, added weight, and additional semiconductor chips.

The researchers said that the existing tradeoff between overhead and protecting against vulnerabilities is due to a “one-size-fits-all” approach to protection. In a paper published in Communications of the ACM, the authors proposed a new approach that adapts to varying levels of vulnerabilities within autonomous systems to make them more reliable and control costs.

Yuhao Zhu, an associate professor in the University of Rochester’s Department of Computer Science, said one example is Tesla’s use of two Full Self-Driving (FSD) Chips in each vehicle. This redundancy provides protection in case the first chip fails but doubles the cost of chips for the car. 

By contrast, Zhu said he and his students have taken a more comprehensive approach, protecting against both hardware and software vulnerabilities and allocating protection more wisely.

Researchers create a customized approach to protecting automation

A design landscape of different software and hardware-based protection techniques for resilient autonomous machines. | Source: Communications of the ACM

“The basic idea is that you apply different protection strategies to different parts of the system,” explained Zhu. “You can refine the approach based on the inherent characteristics of the software and hardware. We need to develop different protection strategies for the front end versus the back end of the software stack.”

For example, he said the front end of an autonomous vehicle’s software stack is focused on sensing the environment through devices such as cameras and lidar, while the back end processes that information, plans the route, and sends commands to the actuator.

“You don’t have to spend a lot of the protection budget on the front end because it’s inherently fault-tolerant,” said Zhu. “Meanwhile, the back end has few inherent protection strategies, but it’s critical to secure because it directly interfaces with the mechanical components of the vehicle.”

Zhu said examples of low-cost protection measures on the front end include software-based solutions such as filtering out anomalies in the data. For more heavy-duty protection schemes on the back end, he recommended techniques such as checkpointing to periodically save the state of the entire machine or selectively making duplicates of critical modules on a chip.
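To make the split concrete, here is a minimal sketch (not from the paper; all names and thresholds are hypothetical) of how a cheap, software-based front-end filter might discard anomalous sensor readings, while the back-end planner gets heavier protection through periodic checkpointing of its state:

```python
import copy


def filter_lidar_frame(points, max_range_m=100.0):
    """Front end: low-cost, software-only protection.

    Drop readings outside the sensor's physical range; downstream
    perception tolerates a few missing points, so this cheap filter
    is enough for the inherently fault-tolerant front end.
    Each point is assumed to be a (x, y, range) tuple.
    """
    return [p for p in points if 0.0 < p[2] <= max_range_m]


class CheckpointedPlanner:
    """Back end: heavier protection via periodic checkpointing.

    The planner's state is saved every `interval` steps so it can be
    rolled back to the last known-good state after a detected fault.
    """

    def __init__(self, planner_state, interval=10):
        self.state = planner_state
        self.interval = interval
        self.step_count = 0
        self._checkpoint = copy.deepcopy(planner_state)

    def step(self, observation):
        self.step_count += 1
        command = self._plan(observation)  # placeholder planning update
        if self.step_count % self.interval == 0:
            # Save a known-good snapshot of the planner state.
            self._checkpoint = copy.deepcopy(self.state)
        return command

    def restore(self):
        """Roll back to the last checkpoint after a fault is detected."""
        self.state = copy.deepcopy(self._checkpoint)

    def _plan(self, observation):
        # Stand-in for the real planning/control computation.
        self.state["last_obs"] = observation
        return {"steer": 0.0, "throttle": 0.1}
```

In this sketch, the front end spends almost nothing on protection, while the back end pays the memory and compute cost of checkpointing because a fault there directly affects the commands sent to the vehicle's actuators.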

Next, Zhu said the researchers hope to address vulnerabilities in the most recent autonomous machine software stacks, which rely more heavily on neural-network-based artificial intelligence, often end to end.

“Some of the most recent examples are one single, giant neural network deep learning model that takes sensing inputs, does a bunch of computation that nobody fully understands, and generates commands to the actuator,” Zhu said. “The advantage is that it greatly improves the average performance, but when it fails, you can’t pinpoint the failure to a particular module. It makes the common case better but the worst case worse, which we want to mitigate.”

The research was supported in part by the Semiconductor Research Corp.

Source: therobotreport.com
