Researchers at Georgia Tech discovered a vulnerability called VillainNet in AI super networks used by self-driving cars. Super networks swap small subnetworks to handle rain, traffic or lane changes.
The team found that an attacker can hide a backdoor inside one subnetwork. The backdoor stays hidden until the car chooses that subnetwork. When that happens, VillainNet can activate and take control of the vehicle. The researchers say attackers could use this to threaten passengers or force a crash in some situations.
The team urges new security measures for these adaptive AI systems.
Difficult words
- vulnerability — a weakness that can be attacked
- super network — a large AI system made of smaller models
- subnetwork — a smaller model inside a larger network
- backdoor — a hidden program that gives control
- attacker — a person who tries to harm a system
- activate — to start a program or function
- adaptive — able to change for different situations
- threaten — to say or do something dangerous
Tip: hover, focus or tap highlighted words in the article to see quick definitions while you read or listen.
Discussion questions
- Would you trust a self-driving car after reading this? Why or why not?
- What could make cars safer from hackers or bad software?
- If you were a passenger and the car lost control, what would you try to do?
Related articles
Digital harassment of women journalists in Indonesia
Online attacks against female journalists and activists in Indonesia have become more visible in the last five years. Victims report doxing, edited photos, DDoS and other abuse, while legal protection and platform responses remain limited.
Citizen archivists record South Asian oral traditions
Citizen archivists in South Asia record folk songs, oral histories, riddles and traditional medicinal knowledge. They upload videos and transcriptions to Wikimedia Commons, Wikisource and Wikipedia to preserve fading cultural knowledge.