US Psychologists Call for Guardrails for Teens Using AI
The American Psychological Association has issued a list of considerations to guide adolescent use of AI.

The use of artificial intelligence (AI) is rapidly increasing, especially among adolescents aged 10 to 25. AI offers new possibilities for efficiency and engagement, but its widespread integration into daily life calls for careful evaluation to ensure safety and positive outcomes for young people. AI technologies range from subtle functions such as predictive text and shopping suggestions to more significant roles including chatbots, automated application reviews, and decision-making tools. “Generative AI” refers to applications capable of producing humanlike text, photorealistic images, lifelike audio, and realistic videos, all of which can influence adolescents’ perceptions and behavior.
“Interactive AI” includes platforms that enable real-time conversations, personalized learning, and tailored content recommendations. Both types can affect adolescent development, social interactions, and understanding of the world. AI is also increasingly used to automate decisions that affect youth, such as school admissions, medical diagnoses, and grading.
Protecting adolescent well-being requires coordinated efforts from parents, caregivers, educators, policymakers, technology developers, adolescents themselves, and platform providers. This advisory presents recommendations for immediate and longer-term actions appropriate to various stakeholders.
Background and considerations
This advisory builds on prior APA work on social media and adolescent psychological development. While AI use among youth is growing quickly, research on its impacts remains limited and complex. Important considerations include:
- AI effects on adolescent development are nuanced, depending on the application, design, training data, and context.
- Adolescence spans a wide developmental range, with age not reliably indicating maturity or psychological competence.
- The adolescent brain undergoes critical developmental changes second in scale only to those of infancy, necessitating heightened safeguards.
- Individual differences in temperament, neurodiversity, stress exposure, social environment, mental health, and socioeconomic factors influence responses to AI content.
- AI outputs often contain biases reflecting unrepresentative training data and limited diversity in development teams.
- Adult use of AI influences adolescents’ attitudes and behaviors, highlighting the need for adults to model critical thinking and healthy use.
- Early attention to youth safety in AI design is crucial to avoid repeating mistakes made with social media, especially since adolescents may be unaware of AI’s presence and AI-generated misinformation can be particularly convincing.
Recommendations
Set healthy boundaries with AI-simulated relationships
AI systems that mimic human companionship or expertise, such as chatbots for social or mental health support, must include safeguards to prevent harm. Adolescents may trust AI characters more than adults and struggle to differentiate simulated empathy from genuine human understanding. These AI relationships risk displacing real-world social connections and fostering unhealthy dependencies. Developers should include clear notifications of AI interaction and promote human contact, especially when youth express distress. Regulatory oversight should ensure mental health protection, and parents and educators should teach youth about AI’s limits and potential manipulative intent.
Design AI systems specifically for adolescents
AI accessible to youth should reflect their developmental needs, with features like protective default settings for privacy and content, transparent explanations understandable to young users, minimized persuasive design elements, and easy access to human support when needed. Rigorous, ongoing testing with diverse adolescent groups is necessary, ideally involving advisory boards including scientists, youth, ethicists, and health professionals focused on adolescent protection.
Support AI uses that promote healthy development
AI can enhance learning by assisting with brainstorming, organizing, summarizing, and personalized feedback, supporting cognitive growth and critical thinking. Teachers need skills to use AI appropriately without undermining students’ own learning processes. Students should be aware of AI’s limitations, actively challenge AI-generated content, and use it as a supplement rather than a replacement for traditional learning strategies.
Limit youth exposure to harmful and inaccurate content
Exposure to violent, graphic, or misleading material can increase mental health risks and normalize harmful attitudes. AI developers must implement strong protections against such content, offer user-controlled content filtering, and collaborate with mental health experts to ensure effective safeguards. Educational resources should help adolescents and caregivers identify and avoid harmful material.
Ensure accuracy of health information
Accurate health content is critical for adolescents, who frequently seek online health information. AI systems providing health advice should verify information quality and include clear disclaimers warning users that AI content is not a substitute for professional advice. Platforms should encourage consultation with qualified humans and remind users to verify information through trusted sources. Parents and educators must reinforce awareness of AI’s potential inaccuracies.
Protect adolescent data privacy
AI systems must prioritize adolescents’ privacy over commercial interests by maximizing transparency and user control, limiting data use for targeted advertising, and preventing data misuse. Clear communication about data practices and informed consent from youth and caregivers are essential. Sensitive data, including biometric or neural information, requires robust safeguards.
Prevent misuse of youth likenesses
The unauthorized use of adolescents’ images, voices, or likenesses can lead to harmful content such as deepfakes and cyberbullying, with severe psychological effects. Platforms must enforce strict restrictions on such uses, including monitoring and compliance mechanisms. Parents, caregivers, and educators should educate youth on online image safety and establish policies addressing harmful AI-generated content in schools.
Empower parents and caregivers
Parents play a key role in guiding adolescents’ AI use but may lack knowledge or resources. Stakeholders should develop accessible, user-friendly materials that explain AI’s risks and benefits, data practices, and manipulative design elements. Tools analogous to movie or game ratings should be created for AI technologies, including customizable controls and interactive tutorials that are regularly updated to keep pace with AI advances.
Implement comprehensive AI literacy education
AI literacy is essential for youth and those supporting them to understand AI’s mechanisms, benefits, limitations, privacy concerns, and biases. Education should cover algorithmic bias, critical evaluation of AI outputs, and ethical considerations. Schools should integrate AI topics across curricula, provide teacher training, and facilitate ethical discussions. Policymakers must establish guidelines, fund programs, and promote awareness campaigns. Developers should create transparent explanations, educational tools, and bias mitigation features while offering simple reporting mechanisms.
Fund rigorous research on AI and adolescent development
Long-term, interdisciplinary research is needed to clarify AI’s effects on youth. This includes longitudinal studies, causal research designs, inclusion of diverse and vulnerable populations, and open data access for independent analysis. Collaboration across psychology, neuroscience, computer science, ethics, education, and public health is vital to understand and address AI’s complex impacts.
This content includes text that has been generated with the assistance of AI, in accordance with Technology Networks' AI policy.