In April a research team presented a study at a major conference. The work came from a lab at Virginia Tech and looked at how AI answers change when people say they are autistic.
The researchers tested six major language models, including well-known systems such as GPT-4. They created hundreds of everyday social scenarios and asked the models what people should do in those situations.
When autism was disclosed, models often gave more cautious or restrictive advice. Some users felt shocked by those answers, while others found them validating. The team says developers should make systems more transparent and give users more control.
Difficult words
- research — careful study to learn new facts
- present — show or explain work to other people
- autism — developmental condition that affects social behavior
- language model — computer system that writes or answers text
- disclose — tell others personal information or facts
- cautious — careful and avoiding possible risk or harm
Tip: hover over, focus, or tap highlighted words in the article to see quick definitions while you read or listen.
Discussion questions
- Do you think users should have more control over AI systems? Why or why not?
- How would you feel if an AI changed its answer when you told it you were autistic?
- What can developers do to make AI systems more transparent for users?
Related articles
Why Rechargeable Batteries Lose Performance
Researchers found that repeated charging and discharging makes batteries expand and contract, causing tiny shape changes and stress. This “chemomechanical degradation” and spreading strain reduce performance and shorten battery life, and imaging revealed how it happens.