Home

SPIRALS

Stanford Psychological Impact, Risk, And LLM Safety

Have you had a harmful experience with a chatbot?

We are researchers studying how chatbot conversations can sometimes cause harm. Share your experience and (optionally) chat transcripts so we can understand risks and improve safety.

PARTICIPATE

Our Research

Characterizing Delusional Spirals through Human-LLM Chat Logs

Moore, Jared, Mehta, Ashish, Agnew, William, Anthis, Jacy Reese, Louie, Ryan, Mai, Yifan, Yin, Peggy, Cheng, Myra, Paech, Samuel J., Klyman, Kevin, Chancellor, Stevie, Lin, Eric, Haber, Nick, & Ong, Desmond - 2026

Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers.

Moore, Jared, Grabb, Declan, Agnew, William, Klyman, Kevin, Chancellor, Stevie, Ong, Desmond C., & Haber, Nick - 2025

Our Policy Recommendations

Response to FDA's Request for Comment on AI-Enabled Medical Devices

Ong, Desmond C., Moore, Jared, Martinez-Martin, Nicole, Meinhardt, Caroline, Lin, Eric, & Agnew, William - 2025

Regulating AI Chatbots Used for Therapy and Emotional Support

Agnew, William, Moore, Jared, Lin, Eric, & Ong, Desmond - 2026

We are from...

Stanford University, UT Austin, Carnegie Mellon University, and the University of Minnesota

Our work has appeared in...

The New York Times, The Economist, USA Today, Scientific American, The Washington Post, Forbes, and The Financial Times