Researchers at Georgia Tech discovered a vulnerability called VillainNet in AI super networks used by self-driving cars. Super networks swap small subnetworks to handle rain, traffic or lane changes.
The team found that an attacker can hide a backdoor inside one subnetwork. The backdoor stays hidden until that subnetwork is chosen. Once it is chosen, the backdoor can activate and take control of the vehicle. The researchers say attackers could threaten passengers or force a crash in some situations.
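The idea can be pictured with a small code sketch. This is only a hypothetical illustration, not the researchers' code: the function names, the "lane_change" condition and the trigger value are all made up to show how a backdoor hidden in one subnetwork stays dormant until the super network selects that subnetwork.

```python
# Hypothetical sketch of a super network that swaps subnetworks by situation.
# One subnetwork hides a backdoor that only runs when it is selected.

def rain_subnetwork(sensor_data):
    # Normal behaviour: steer back toward the lane centre in rain.
    return {"steering": sensor_data.get("lane_offset", 0.0) * -0.5}

def lane_change_subnetwork(sensor_data):
    # Backdoored subnetwork: behaves normally unless a secret trigger
    # pattern appears in the input (trigger value is invented here).
    if sensor_data.get("trigger_pattern") == "0xDEADBEEF":
        return {"steering": 1.0}  # attacker-chosen output: hard turn
    return {"steering": sensor_data.get("lane_offset", 0.0) * -0.8}

SUBNETWORKS = {
    "rain": rain_subnetwork,
    "lane_change": lane_change_subnetwork,
}

def super_network(condition, sensor_data):
    # The super network picks the subnetwork that matches the situation.
    # The backdoor is unreachable while "rain" is selected and only
    # becomes active when "lane_change" is chosen.
    return SUBNETWORKS[condition](sensor_data)

# Normal use vs. the hidden trigger being hit.
print(super_network("rain", {"lane_offset": 0.2}))
print(super_network("lane_change", {"lane_offset": 0.2, "trigger_pattern": "0xDEADBEEF"}))
```

Because the malicious code only runs inside one of many interchangeable subnetworks, ordinary testing of the other subnetworks would not reveal it.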
The team urges new security measures for these adaptive AI systems.
Difficult words
- vulnerability — a weakness that can be attacked
- super network — a large AI system made of smaller models
- subnetwork — a smaller model inside a larger network
- backdoor — a hidden program that gives control
- attacker — a person who tries to harm a system
- activate — to start a program or function
- adaptive — able to change for different situations
- threaten — to say or do something dangerous
Discussion questions
- Would you trust a self-driving car after reading this? Why or why not?
- What could make cars safer from hackers or bad software?
- If you were a passenger and the car lost control, what would you try to do?