

Toddlers Still Learn Language Faster Than AI

A child holds a fork over a plate of salad, with a glass of orange juice visible. Credit: iStock


For all their processing power, artificial intelligence systems remain far behind young children when it comes to acquiring language. A study published in Trends in Cognitive Sciences introduces a new theoretical model that explains why children consistently outperform machines at learning language.


The work was led by Caroline Rowland at the Max Planck Institute for Psycholinguistics in collaboration with researchers from the ESRC LuCiD Centre in the United Kingdom. Drawing on insights from computational science, linguistics, psychology and neuroscience, the framework highlights how children use social and sensory cues to build language more effectively than text-trained AI systems.

Beyond data volume: how children learn differently

Children acquire language not just from large volumes of data but from dynamic, multisensory environments. Unlike large language models such as ChatGPT, which rely primarily on static text data, children learn through active engagement with the world.


This embodied process involves integrating signals from sight, sound, touch and movement, and unfolds alongside the development of cognitive, motor and social skills. Language emerges as part of a broader system of understanding the world, rather than as a standalone function.


The framework emphasizes that children’s ability to initiate interactions—such as pointing, crawling or vocalizing—creates continual opportunities for learning. These self-generated experiences allow children to adapt their learning in real time, a quality that is difficult to replicate in current AI models.

New tools deepen understanding of child development

Recent advancements in observational technology, including head-mounted eye-trackers and AI-assisted speech analysis, have allowed researchers to capture the nuanced ways children interact with their environment. Despite these gains in data collection, existing theories have struggled to account for how such information leads to fluent language use.


The study aims to bridge this gap by presenting a cohesive model that links child development processes with language acquisition. According to the authors, the advantage children have lies not in the quantity of data but in how they learn from it.

Broader implications for AI and cognitive science

The insights from this work extend beyond early language development. They may help guide future research in artificial intelligence, adult language processing and the evolution of communication.


While the study does not claim that machines can—or should—learn exactly as humans do, it suggests that current AI systems might benefit from more human-like learning environments. For instance, exposure to socially grounded, multisensory interactions could offer new paths for machine learning research.


Reference: Rowland CF, Westermann G, Theakston AL, et al. Constructing language: A framework for explaining acquisition. Trends Cogn Sci. 2025. doi:10.1016/j.tics.2025.05.015

This content includes text that has been generated with the assistance of AI. Technology Networks' AI policy can be found here.