When I finally read it, I found it remarkably balanced. It cites positives and negatives, all of which agree with my experience.
> Con: AI poses a grave threat to students' cognitive development
> When kids use generative AI that tells them what the answer is … they are not thinking for themselves. They're not learning to parse truth from fiction.
None of this is controversial. It happens without AI, too, with kids blindly copying what the teacher tells them. Impossible to disagree with, though.
> Con: AI poses serious threats to social and emotional development
Yep. Just like non-AI use of social media.
> Schooling itself could be less focused on what the report calls "transactional task completion" or a grade-based endgame and more focused on fostering curiosity and a desire to learn
No sh*t. This has probably been a recommendation for decades. How could you argue against it, though?
> AI designed for use by children and teens should be less sycophantic and more "antagonistic," pushing back against preconceived notions and challenging users to reflect and evaluate.
Genius. I love this idea.
=== ETA:
I believe another important ingredient is explicitly teaching students how to use AI in their learning process, and that a beautiful paper straight from the AI is not something that will help them later. Right now we are in a time of transition, and even students who want to be successful are uncertain of what academic success will look like in 5 years, what skills will be valuable, etc.
It's perhaps to be expected, as these education people are usually non-technical. But it's definitely concerning that (once again) a lack of technical and media literacy among these education types will lead them to let (overall) unhelpful tech swarm the system.
The one-off stuff is mostly taking a picture of a math problem and asking the model to walk step by step through the process. This has been particularly helpful to me, as the processes and techniques have changed.
It's been useful in foreign languages as well, to rapidly check work and make corrections.
On the generative side it's fantastic for things like "give me 3 more math problems similar to this one," or for generating worksheets and study guides.
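If you ever want to batch this up into whole worksheets, here's roughly what that request looks like scripted, a minimal sketch assuming the OpenAI Python SDK; the model name, prompt, and sample problem are placeholders of mine, not a recommendation:

```python
# Hypothetical sketch: generate practice problems similar to a given one.
# Assumes the OpenAI Python SDK; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

problem = "Solve for x: 2x^2 - 5x - 3 = 0"
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You write math practice problems. Do not include solutions."},
        {"role": "user",
         "content": f"Give me 3 more math problems similar to this one:\n{problem}"},
    ],
)
print(response.choices[0].message.content)
```

In practice we just type the same request into the chat window; a script like this only matters if you want to mass-produce study guides.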
As far as technological adoption goes, it's 100% certain that every kid knows what ChatGPT is (maybe even more so than "AI" in general). The kids have some very mixed feelings about it: my middle schooler was pretty creeped out by the ChatGPT voice interface, for example.
If the AI guesses (hallucinates), it's dangerous. If the AI proves (verifies), it's a superpower.
I refuse to let my own tools hallucinate anymore, so I implemented *Bounded Recall* using Optimal Transport. My local stack now operates at a ~0.02% error rate, compared to the 12% standard.
We solved the engineering problem; we just need to stop deploying the "lazy" version of AI to schools. When you fix the memory layer, the risk evaporates.
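For readers wondering what an optimal-transport gate could even look like here: the comment above doesn't define "Bounded Recall," so the following is only my guess at the flavor, a minimal sketch assuming the POT library (`ot`) and precomputed chunk embeddings. The function names, structure, and threshold are illustrative assumptions, not the commenter's actual implementation:

```python
# Hypothetical sketch of an OT-based "refuse rather than guess" gate.
# NOT the commenter's Bounded Recall; names and threshold are assumptions.
import numpy as np
import ot  # POT: Python Optimal Transport


def grounding_cost(query_vecs: np.ndarray, source_vecs: np.ndarray) -> float:
    """Exact OT (earth mover's) cost of moving the query's embedding mass
    onto a source's embedding mass; low cost = well grounded."""
    a = np.full(len(query_vecs), 1.0 / len(query_vecs))    # uniform query weights
    b = np.full(len(source_vecs), 1.0 / len(source_vecs))  # uniform source weights
    M = ot.dist(query_vecs, source_vecs, metric="euclidean")  # pairwise cost matrix
    return float(ot.emd2(a, b, M))


def answer_or_refuse(query_vecs, corpus, threshold=0.35):
    """corpus: {source_name: embedding array}. Returns the best-grounded
    source, or None to signal 'refuse instead of hallucinating'."""
    costs = {name: grounding_cost(query_vecs, vecs) for name, vecs in corpus.items()}
    best = min(costs, key=costs.get)
    return best if costs[best] <= threshold else None
```

Whether a gate like this gets anywhere near a 0.02% error rate is exactly the kind of claim I'd want benchmarked before anyone deploys it to schools.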
The average student, even at university, is functionally illiterate now; that's not an exaggeration. Even if we assume there is an LLM that would help them learn, how are these students supposed to use it?
https://hilariusbookbinder.substack.com/p/the-average-colleg...
I believe this is at the heart of the issue. If what is taught is mostly solving problems that require nothing more than rote memory or substituting values into memorized equations, then yes, students will use LLMs.
I agree some level of this brain-dead work is necessary to build muscle/mental memory. However, I believe that if this is all they learn, they will be unprepared for university, where the problems posed will challenge why they are using that equation, or whether the problem is even solvable.
A1 should not be in every classroom.
Furthermore any books or teaching that does not feature medium rare as the correct cooking of a steak should be banned (and burned to well done).
For studying, LLMs feel like using a robot to lift weights for you at the gym.
——
If people used to get cardio as a side effect of having to walk everywhere, and we were forced to think as a side effect of having to actually do the homework, then are LLMs ushering in an era of cognitive ill health?
For what it’s worth, I spend quite a bit of effort to understand how people are using LLMs, especially non-tech people.
> "We know that richer communities and schools will be able to afford more advanced AI models," Winthrop says, "and we know those more advanced AI models are more accurate. Which means that this is the first time in ed-tech history that schools will have to pay more for more accurate information. And that really hurts schools without a lot of resources."
... and am somehow reminded of the movie Gattaca.
I've only skimmed it, but I note that all this research predates Nov 2025 and is quite broad. It does get into coding a bit, mentioning GitHub Copilot, and also refers to a paper about vibe-coding whose conclusion is that not understanding the artifacts is a problem.
So all this reporting is from before Gemini 3 and Opus 4.5 came out. Everything is really different since those arrived.
While substitute teaching just before Xmas 2025, I installed Antigravity on the student account of the class computer and vibe-coded two apps on the smart board while the kids worked on Google Classroom. This was impromptu, to liven up things, but I knew it would work because I had such amazing experiences with the tool the week before.
* [1] Quadratic Formula Explorer for Algebra 2 (its core math is sketched just below)
* [2] Proving Parallelograms for Honors Geometry
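The mathematical core of [1] is tiny, which is part of why vibe-coding it live was feasible. A minimal sketch in Python, my own illustration here rather than the app's actual code (the real thing is an LLM-generated interactive web page):

```python
# Core computation behind a quadratic explorer: the discriminant and
# roots of ax^2 + bx + c = 0. Illustrative only, not the app's code.
import cmath


def quadratic_roots(a: float, b: float, c: float):
    """Return the discriminant and both roots (complex if needed)."""
    if a == 0:
        raise ValueError("a must be nonzero for a quadratic")
    disc = b * b - 4 * a * c
    sqrt_disc = cmath.sqrt(disc)          # handles disc < 0 as well
    r1 = (-b + sqrt_disc) / (2 * a)
    r2 = (-b - sqrt_disc) / (2 * a)
    return disc, r1, r2


disc, r1, r2 = quadratic_roots(1, -5, 6)
print(disc, r1, r2)  # 1.0 (3+0j) (2+0j): two real roots
```

The explorer's actual value is the interactive layer on top: sliders for a, b, c and a live graph of where the parabola crosses the x-axis.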
Before the class ended, I gave a quick talk, the gist of which was: "I just made these tools to understand the coursework by conversing with an LLM. Are you going to use this to cheat on your homework or to enhance your understanding?"
I showed it to a teacher, and she pointed me to existing tools like them on educational web sites. But that misses the point: we can now manifest the very hyper-specific tools we need... for example, how should the Quadratic Formula Explorer work for someone with dyslexia?
I'm not sure what the next steps with all this are, but education certainly needs to adapt. The paper notes "AI can enrich learning when well-designed and anchored in sound pedagogy," and what I did there is neither, so imagine how sweet it's going to be when skilled curriculum designers weave this into educational systems.
[1] https://conacademy.github.io/quadratic_explorer/ [2] https://conacademy.github.io/proving_parallelograms/
Wait, but organizing and expressing your thoughts IS writing. If you don’t make them do the work, why bother sending them to school at all?
AI has a great niche place in schools: searching the library. The rest of this seems dumb.
https://www.nytimes.com/2025/10/29/opinion/ai-students-think...
Taking the first example: if you’re an artist worried about AI replacing you, you need to start your thinking from the position that AI is absolutely going to make the “I can create an image” part of your value proposition worthless. Yes, a massive fraction of what you might have been able to get paid and recognized for in the past is now utterly irrelevant. Pleading with the public to not use AI, protesting, demanding legislation, praying - none of it will stop this reality from coming to be in your lifetime, probably within a few years at most.
I see a lot of comments and articles that don’t seem to understand this at all. They think there’s some way we can slow the adoption of AI in areas we think it’s harmful, or legislate a way into a desirable future, or whatever. They’re wrong. Whatever the future holds for us, it’s one where AI will be absolutely everywhere and massively disrupt society and industry as it exists today. Start your planning from that reality or you’re going to get blindsided.
Bloom's two-sigma finding is well known and well documented in education: one-on-one tutoring reliably produces gains of roughly two standard deviations over conventional classroom instruction.
AI is the first thing that can personalize education and instruction at scale and provide support to instructors.
The authors seem to lack the technical literacy to know that you can train a model on textbooks alone and keep it focused there, instead of their explorations using general models and the pitfalls those have. Not knowing this key difference affects some of the points being made.
Having a take on technology requires some semblance of digital and technical literacy in the paper to acknowledge or navigate these issues; otherwise it becomes a potential blind spot.
It takes legitimate concerns and, ironically, explores them in average ways, much like an LLM returns average text for vague or incomplete questions.
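To make the "train on textbooks" point concrete, here is roughly what that workflow looks like with the HuggingFace stack, a minimal sketch with placeholder paths and a deliberately small model. Whether a textbook-only model actually serves classrooms better than a general one is the open question, and this sketch doesn't settle it:

```python
# Hypothetical sketch: fine-tune a small open model on textbook text only.
# Paths and the model choice are placeholders, not recommendations.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Plain-text textbook files, one training example per line.
ds = load_dataset("text", data_files={"train": "textbooks/*.txt"})["train"]
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="textbook-lm",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```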
There will, however, be a gigantic gulf between kids who use AI to avoid learning and those who use AI to aid learning.
Objective review of Alpha school in Austin: