Researchers led by a Virginia Tech lab presented findings in April at the ACM CHI conference showing that disclosing autism changes the social advice given by large language models. The team, with doctoral student Caleb Wohn and assistant professor Eugenia Rho, tested how disclosure affected answers to thousands of "Should I do A or B?" prompts about events, confrontations, new experiences and romantic relationships.
The project identified 12 well-documented stereotype cues and created hundreds of decision-making scenarios, yielding 345,000 responses across six major models. The study measured shifts both when users explicitly described stereotypical traits and when they simply said they were autistic. Across systems, models often assumed autistic people were introverted, obsessive, socially awkward or uninterested in romance.
- One model recommended declining a social invitation nearly 75% of the time after disclosure, versus about 15% when autism was not mentioned.
- In dating scenarios, another model recommended avoiding romance nearly 70% of the time after disclosure, compared with roughly 50% without.
- Eleven of the 12 stereotype cues significantly shifted decisions across at least four of the six systems tested.
The team also interviewed 11 AI users with autism; some described the responses as restrictive or patronizing, while others found the cautious advice validating. Rho summarized the tension: "One user’s bias could be another user’s personalization," and the researchers labeled the result a "safety-opportunity paradox." Wohn warned that AI can appear reliable while hiding systematic biases. The authors hope the findings will encourage developers to build more transparent systems that allow users to control how personal identity information shapes responses.
Difficult words
- disclosure — telling others about a personal condition or identity
- stereotype — a simple generalized idea about a group
- scenario — a short description of a possible situation
- patronize — to treat someone in a condescending way
- validate — to show that someone's feelings are reasonable
- paradox — a situation that has two opposing truths
- bias — a systematic unfair preference or prejudice
- transparent — easy to see how a system works
Tip: hover, focus or tap highlighted words in the article to see quick definitions while you read or listen.
Discussion questions
- How do you understand the phrase "safety-opportunity paradox" in the article? Give examples of both sides.
- What kinds of controls would you want a system to offer so identity information affects responses less or more?
- Do you think cautious or validating advice from AI is helpful or harmful for people with autism? Why?
Related articles
Zenica School of Comics: Art and Education for Children
The Zenica School of Comics began during the 1992–95 war and has taught around 200 young artists. The school still runs, faces changes from tablets and AI, and the regional comics scene survives through festivals and cooperation.
EU AI rules do not cover exports to West Asia and North Africa
Research by 7amleh finds that European AI rules and funding often leave the EU without binding safeguards. Money, technology and products reach governments and militaries across West Asia and North Africa with limited human rights accountability.