Researchers at the University of Utah developed a framework to clarify how conversational artificial intelligence, especially large language models, might automate therapy tasks. The framework, posted ahead of publication in Current Directions in Psychological Science, was led by Zac Imel and developed with Vivek Srikumar, Brent Kious and other collaborators. It is intended to separate practical questions about use, risk and responsibility from the larger question of whether machines will replace therapists.
The team describes four categories of automation along a continuum:
- Category A: scripted systems with prewritten content and decision trees;
- Category B: AI that evaluates therapists by reviewing sessions and giving feedback or ratings;
- Category C: AI that assists therapists by suggesting interventions, prompts or phrasing while a human delivers care;
- Category D: AI that provides therapy directly as an autonomous agent, possibly under supervision.
The researchers assessed the usefulness and risk of each category, noting that simple note‑taking or coaching tools have different risk profiles from fully autonomous AI therapists. They warn that LLMs used directly for counseling can fabricate information, encode biases and act unpredictably, and that they do not always follow evidence‑based psychotherapy techniques.

To improve practice, the team is partnering with SafeUT, Utah’s statewide text‑based crisis line, to build tools that evaluate crisis counselors’ sessions and deliver feedback to maintain skills. Imel says trained LLMs can capture key treatment components quickly and give timely feedback, something current methods rarely do at scale. The paper recommends beginning with lighter, lower‑risk tools while studying their benefits and harms.

Additional coauthors are from the University of Washington, University of Pennsylvania and the Alan Turing Institute; Zac Imel is a cofounder of Lyssn.
Difficult words
- framework — set of ideas to organise work
- automate — make a process run by machines
- continuum — a range with gradual differences across examples
- autonomous — able to act independently without human control
- fabricate — create false or invented information deliberately
- bias — systematic unfair preference or prejudice
- assess — judge the quality or value of something
- intervention — action taken to improve a situation
Tip: hover, focus or tap highlighted words in the article to see quick definitions while you read or listen.
Discussion questions
- What benefits and risks do you think lighter, lower-risk AI tools could bring to therapy practice? Give reasons.
- How might evaluating counselors' sessions with AI help maintain or improve skills? What challenges could arise?
- If AI begins to assist more in therapy, how could therapists' roles and training change in your view?
Related articles
Smart textiles could monitor and protect health
Researchers reviewed studies on MXenes, microscopic metal-based materials that can give fabrics new functions. MXene-based smart textiles can measure vitals, show antimicrobial behaviour and harvest solar energy, but they face limits like oxidation and sustainability.
Ancestral healing in the Caribbean
Ancestral healing asks societies to face historical wounds so people can live healthier lives. In the Caribbean, educators combine shamanic practices, nervous-system work and cultural rituals with scientific findings about trauma and community care.
AI audio summaries of research can help — and err
Researchers tested Google’s NotebookLM, which turns research papers into podcast-style audio. The summaries were engaging and clearer for teaching, but every audio overview contained mistakes, so the authors advise reading the original papers to check claims.