A team at the University of Utah created a framework to assess how conversational artificial intelligence, including large language models, could automate parts of therapy. The framework describes four categories: scripted systems, AI that evaluates therapists, AI that assists therapists, and AI that provides therapy directly. Each category shows a different level of automation.
The researchers evaluated the usefulness and risks of each level and noted that users and health systems may not always know which level of automation they are using. The team works with a statewide crisis text line to build tools that review sessions and give feedback. They advise starting with lower-risk tools while studying their possible benefits and harms.
Difficult words
- framework — A basic structure to organize ideas or work
- assess — To judge how good or useful something is
- automate — To make a task happen by machine or software
- evaluate — To examine something and decide its quality
- therapy — Treatment to help a person's mental or physical health
- risk — Possibility that something bad could happen
- feedback — Information given to improve work or behavior
Discussion questions
- Do you think AI should give therapy directly? Why or why not?
- Would you feel comfortable if AI gave feedback to a therapist? Why?
- Do you agree with starting with lower-risk tools first? Why or why not?