
Study Claims AI Assistance Degrades Cognitive Persistence After ~10 Minutes

Research · 1 source · 1d ago

Summary

  • Multi-university preprint claims AI assistance impairs cognitive persistence after just ~10 minutes
  • Across 1,220+ participants, removing AI access caused performance and perseverance to drop sharply
  • Hint-based AI use preserved independent thinking better than prompting for full answers
  • Researchers warn of a 'boiling frog' effect where cognitive erosion compounds invisibly over years

Details

1. Research

Three controlled experiments spanning math and reading tasks

Study 1: ~350 Americans solved fraction equations with or without a GPT-5-based chatbot; AI was cut off mid-exam. Study 2: ~670 participants completed a math reasoning test. Study 3: ~200 participants completed reading comprehension. All three replicated the same pattern of performance and persistence decline.

2. Stat

1,220+ total participants; effects emerged after ~10 minutes of AI use

The combined sample gives statistical weight to the findings. The preprint states: 'After just [about] 10 minutes of AI-assisted problem-solving, people who lost access to the AI performed worse and gave up more frequently than those who never used it.' The study has not yet undergone peer review, a material caveat given its causal claims.

3. Research

Participants stopped trying, not just answering incorrectly

UCLA assistant professor Rachit Dubey: 'Once the AI is taken away from people, it's not that people are just giving wrong answers. They're also not willing to try without AI.' The effect is a drop in motivation and persistence, not merely a skill gap.

4. Insight

Mode of AI use matters: hints vs. full answers

Participants who asked the chatbot for hints or clarification fared better after losing AI access than those who prompted for complete answers. This points to interaction design as a meaningful mitigation lever for developers building AI tools.

5. Context

Preprint only; peer review pending

The study has not yet been peer-reviewed. It was reported by Futurism based on the preprint. Practitioners should treat findings as a credible signal worth monitoring rather than settled science.

Research = study design or finding, Stat = quantitative result, Insight = nuanced or design implication, Context = important caveat or background

What This Means

For AI practitioners and product teams deploying AI tools in educational or knowledge-work contexts, this study raises a concrete design question: do tools that maximize immediate output systematically undermine the user capabilities they depend on over time? The finding that hint-based AI use fares better than answer generation points toward scaffolding as a design principle: build AI that supports thinking rather than replacing it. Because the study is preprint-only, its findings should be treated as a serious signal worth monitoring, not settled science.
