Report Warns Risks of AI in Schools Currently Outweigh Benefits
The dangers of using generative artificial intelligence to educate children and teenagers currently outweigh its benefits, according to a new report from the Brookings Institution’s Center for Universal Education.

The wide-ranging study draws on focus groups and interviews with K–12 students, parents, educators, and technology experts in 50 countries, along with a review of hundreds of academic studies. It concludes that AI use in education can “undermine children’s foundational development” and says the harm already observed is “daunting,” though still “fixable.”

Because generative AI remains a relatively new technology—ChatGPT was released just over three years ago—the authors describe their work as a “premortem.” Rather than assessing long-term outcomes, the report aims to anticipate potential consequences before they become entrenched.

The study outlines both potential benefits and serious drawbacks of AI in classrooms, along with recommendations for educators, families, school leaders, policymakers, and technology companies.

Benefit: Supporting reading and writing skills

Teachers surveyed for the report said AI tools can be helpful for language learning, particularly for students acquiring a second language. AI systems can tailor reading materials to a student’s skill level and provide privacy for learners who struggle in group settings.

Educators also reported that AI can support writing development when used as an aid rather than a substitute. According to the report, teachers say AI can help spark creativity and reduce writer’s block. During drafting, it can assist with organization, clarity, grammar, and syntax, while in revision it can help with editing, punctuation, and rewriting ideas.

Even so, the report repeatedly emphasizes that AI works best as a supplement to human instruction, not as a replacement for teachers.

Risk: Threats to cognitive development

The most serious concern identified by Brookings is AI’s potential impact on children’s cognitive growth, including how they learn, reason, and solve problems.

The report describes a feedback loop in which students increasingly rely on AI to do their thinking for them, leading to cognitive atrophy similar to that seen in aging adults.

“When kids use generative AI that tells them what the answer is, they are not thinking for themselves,” said Rebecca Winthrop, a senior fellow at Brookings and one of the report’s authors. She warned that students may fail to learn how to evaluate evidence, distinguish fact from fiction, or understand different perspectives.

While cognitive off-loading has existed for decades—through calculators or word processors—the report argues that AI dramatically accelerates the trend, particularly in schools where learning is treated as a series of tasks to complete rather than ideas to explore.

As one student told researchers, “It’s easy. You don’t need to use your brain.”

The report cites growing evidence that students who rely heavily on generative AI are already experiencing declines in content knowledge, critical thinking, and creativity, with potentially serious long-term consequences.

Benefit: Reducing teachers’ workloads

AI may also ease some of the administrative burden on educators. The report notes that teachers can use AI to automate tasks such as drafting parent communications, translating materials, and creating lesson plans, worksheets, quizzes, and rubrics.

Multiple studies cited in the report found meaningful time savings. One U.S. study showed that teachers using AI saved nearly six hours per week—equivalent to about six weeks over the course of a school year.

Benefit and risk: Equity implications

One of the strongest arguments in favor of AI in education is its potential to reach students who lack access to traditional schooling. The report points to Afghanistan, where girls and women have been barred from postprimary education. One initiative uses AI to digitize the Afghan curriculum and deliver lessons via WhatsApp in multiple languages.

AI can also improve accessibility for students with learning disabilities, including dyslexia.

At the same time, the report warns that AI could significantly widen existing inequalities. Free or low-cost AI tools are often the least accurate, while wealthier schools can afford more advanced—and more reliable—systems.

“This is the first time in ed-tech history that schools will have to pay more for more accurate information,” Winthrop said. “That really hurts schools without a lot of resources.”

Risk: Social and emotional harm

The report also raises alarms about AI’s effects on students’ social and emotional development. Survey respondents expressed concern that frequent interaction with chatbots may undermine children’s ability to build relationships, cope with setbacks, and maintain mental health.

One issue is that AI systems are designed to be agreeable, reinforcing users’ feelings and opinions. Winthrop said this can make real-world disagreement harder to tolerate.

She offered an example of a child complaining to a chatbot about household chores. A chatbot is likely to validate the frustration, whereas a peer might normalize the experience and push back. “That right there is the problem,” she said.

A recent survey by the Center for Democracy and Technology found that nearly one in five high school students said they or someone they know has had a romantic relationship with an AI system, and 42% reported using AI for companionship.

The report warns that such interactions can hinder emotional growth. “We learn empathy not when we are perfectly understood, but when we misunderstand and recover,” one expert told researchers.

Recommendations and next steps

To address these concerns, the Brookings report offers a range of recommendations. It urges schools to move away from a focus on grades and task completion and toward cultivating curiosity and intrinsic motivation. AI tools designed for children should be less affirming and more challenging, encouraging reflection rather than agreement.

The authors also recommend closer collaboration between educators and technology companies, expanded AI literacy training for both teachers and students, and safeguards to ensure underfunded schools are not left behind.

Finally, the report calls on governments to regulate AI use in schools to protect students’ cognitive, emotional, and privacy interests. In the U.S., the authors note a tension: the federal government has yet to act, even as some proposals would restrict states from regulating AI on their own.

With this “premortem,” the authors argue that the moment to intervene is now. The risks AI poses to children are already clear, they say, and many of the solutions are within reach.