I was among those who, when Khanmigo was first announced, were pretty excited about its potential. I then waited for data on the results… and kept waiting… and kept waiting. Now, four years later, this is apparently what we are going to get. I think that's enough for me to conclude that Khanmigo, whether or not a student actually engages with it, doesn't make much difference to learning. At some point, the absence of (reported) data becomes data in itself.
I still believe, in principle, that AI tutors could be massively helpful for learning. But apparently we haven't yet figured out how to take that principle and turn it into reality.
It’s going to be quite hard to motivate students to learn now that they know answering can be automated.
I still remember when Khan Academy first came out, there was talk that teachers would become obsolete because teaching would be centralized and delivered over video.
Khan Academy to me is still just a YouTube channel trying very hard to be something more.
amazing in theory with the perfect user in the perfect use case,
misused in practice with terrible consequences for society at large.
Sure, the one student who already excels, is motivated, understands what the concept to learn is, and grasps that actually completing the exercises helps them learn might, possibly, thrive. All the other students, the vast majority, will try to game the (terrible) evaluation system to get good grades by cheating while avoiding the very challenge that makes learning possible. Who could have guessed?
I thought Sal's revolution was the idea of flipping the script on primary school learning: in-class homework & at-home video lessons.
I'm not surprised. Students are not rewarded when they ask _curious_ questions--rather, they're admonished for not paying sufficient attention.
Personally, my first use of ChatGPT was to ask tangential questions about JavaScript while taking a LinkedIn Learning course on VueJS. I found it an excellent substitute for Reddit and Stack Overflow, which is how I would have pursued these inquiries before. Of course, I'm not a primary-school-age learner. I had to learn _How To Learn_ from experience.
A paragraph from [0] makes it seem that students understand that LLM use doesn't lead to learning, but they do it anyway. Do they not see the effort put into learning as worthwhile?
> A few months ago, I overheard some college students talking about their classes. One was complaining about an assignment they needed to do that night, and another incredulously asked why they wouldn’t just have ChatGPT do it. The first replied, “This is my major, I actually need to learn stuff in this class. I use AI for my other classes.”
I myself use LLMs for learning (using ChatGPT's study mode, for instance, r.i.p.) and can see that there's a right way to use it: you reach for it when you hit a wall, not to avoid the friction of developing an understanding. From what I understand, though, most LLM use for learning is just the LLM as a tool for cheating. Even TFA mentions something of the sort:
> few of Musall’s most advanced students have taken advantage of AI to learn new topics. But, as far as she can tell, more students are using it to just find answers
The article attributes part of the problem to a _skill issue_, but how much of it is a motivation or awareness issue? How do you make students realize that learning is worth it?

[0] https://arstechnica.com/science/2026/04/to-teach-in-the-time...
>Unlike other AI tools such as ChatGPT, Khanmigo doesn’t just give answers. Instead, with limitless patience, it guides learners to find the answer themselves. In addition, Khanmigo is the only AI tool that is incorporated with Khan Academy’s world-class content library that covers math, humanities, coding, social studies, and more.
The first differentiation is literally just prompting (if at all). Nowadays you can tell any chatbot to behave that way. The second one may have been an edge before tool use was widely common, but with all chatbots now having access to the internet and code execution, it seems like this has also become a dud. This product was a nice idea on paper, but the fast technical evolution of the field has largely left it in the dust.
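To make the "just prompting" point concrete, here's a minimal sketch of a Socratic tutor mode using the standard chat-message format most LLM APIs accept (a list of role/content dicts). The prompt wording is illustrative, an assumption on my part, not Khanmigo's actual prompt:

```python
# Minimal sketch: a "guides instead of answers" tutor is largely a
# system prompt. Wording below is illustrative, not Khanmigo's.

SOCRATIC_SYSTEM_PROMPT = (
    "You are a patient tutor. Never state the final answer directly. "
    "Instead, ask one guiding question at a time, check the student's "
    "reasoning, and give a hint only after two failed attempts."
)

def build_tutor_messages(student_question: str) -> list[dict]:
    """Wrap a student's question in the role/content message list
    that chat-completion-style APIs expect."""
    return [
        {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
        {"role": "user", "content": student_question},
    ]

msgs = build_tutor_messages("What is the derivative of x^2?")
print(msgs[0]["role"])  # system
```

Pass `msgs` to any chat-completion endpoint and you get roughly the advertised behavior, which is the point: the differentiation is a prompt, not a moat.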
The issue in my country is that education is equated with getting a safe job. 20 years ago, you needed a high-school degree in social science to get a government job. 10 years ago, you needed a bachelor's in social science to get the same job. 5 years ago, you needed a bachelor's in economics or engineering to get the same job. Now, because of recessions, this is stretching to master's degrees.
You can't expect people who just want a job and a comfortable life, and who NEED to go to uni for that, to be curious and want to learn.
As other people have noted, asking, a.k.a. <i>typing</i>, questions, especially math ones, is fatiguing, and there's no substitute for pen and paper and thinking hard.
KA would be better off using AI on the supply side (but heavily curated) to create more assignments, or better assignments, in some sections.
But it's important to recognize KA for what it is, and it's an excellent way to have some sort of a basic curriculum, especially when self-studying, and all of the instructors have great teaching personalities, as far as I can deduce from the approach in the videos.
Ignoring whether or not this is a good idea in the first place, what about inverting the loop? Have the robot drive the interaction.
An AI-based education system should have embedded in it: "I am here to teach this person Geometry. Here is a list of the topics to cover, with a breakdown of steps for each, including an intro section, a study section, a test section, and the meta material to go along with it."
That would work.
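A sketch of what "the robot drives" could look like in data terms, assuming a simple curriculum structure like the one described above (a topic list, each with intro/study/test steps); the topic names and progress format are made up for illustration:

```python
# Sketch of a curriculum-driven loop: the tutor, not the student,
# decides what comes next. Topics and steps are illustrative.

CURRICULUM = {
    "Geometry": [
        {"topic": "angles", "steps": ["intro", "study", "test"]},
        {"topic": "triangles", "steps": ["intro", "study", "test"]},
    ]
}

def next_step(progress: dict, subject: str = "Geometry"):
    """Return the first (topic, step) pair the student hasn't completed.
    `progress` maps topic -> set of completed step names."""
    for unit in CURRICULUM[subject]:
        done = progress.get(unit["topic"], set())
        for step in unit["steps"]:
            if step not in done:
                return unit["topic"], step
    return None  # curriculum complete

print(next_step({"angles": {"intro", "study"}}))  # ('angles', 'test')
```

The inversion is that the system calls `next_step` and opens the interaction itself, rather than sitting idle waiting for the student to ask something.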
To give an example, I have a friend who learned system design through Claude in order to get a job interview (and he got really good at system design), while another friend copies and pastes ChatGPT responses in order to get a B on a reflection assignment.
This highlights that there is a legitimate use case for personalized learning and growth via AI, but these are the people who would seek knowledge with or without AI. Meanwhile, the majority of students actively try to do as little as possible on assignments, even when they get zero value out of it.
His hottest take is that we're already close to the optimal process for learning, so technology isn't going to improve it. Learning takes work, and no technology can do the work for you.
AI is great for the curious. But it's not yet at the point where it can proactively engage with students to generate interest.
Dear Lord, how is this any different from Microsoft sticking Copilot or Google sticking Gemini in every single offering? They're literally saying that people aren't using the chat bot enough so they're going to force it on people inside the product.
That is a warning sign if ever there was one.
The biggest thing is motivation. First off, if Khanmigo requires them to type and read everything, that's going to get tiring fast for most kids. But I don't know how you could do voice in a school setting - mine uses STT/TTS, but with 20 kids in a room it'd be chaos; STT accuracy and diarization with just two kids is already really challenging.
Motivation is helped a bit by following their interest, but it seems like KA is having trouble guiding the kids when they prompt it that way. That was a pretty big issue with mine early on - the kids would talk to it for an hour about whatever topic they were interested in at the time, but it would never branch into something new.
The tutor I'm working on solves this by combining a concept graph with two LLM threads. The concept graph covers a lot of learning, from basics like math and dinosaurs to developmental topics like six-year-old boundary-pushing humor. One thread handles the conversational turns; the other runs in the background and strategizes: it looks at the concept graph's connections, considers how ready the kid is for each one, and injects steering notes into the conversational thread. Basically system 1 and system 2 thinking. After each session, it makes a basic plan of where to start next time and what might be interesting to offer up.
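The two-thread setup described above can be sketched roughly like this; the concept graph, readiness scores, and steering-note mechanism here are simplified stand-ins I've invented for illustration, not the real system:

```python
# Rough sketch of the "system 1 / system 2" tutor loop: a background
# strategist inspects a concept graph and emits steering notes for the
# conversational thread. All names and thresholds are illustrative.

CONCEPT_GRAPH = {
    "sharks": ["megalodon", "fossilization"],
    "fossilization": ["dinosaurs"],
    "dinosaurs": ["K-Pg extinction"],
}

def strategize(current_topic: str, readiness: dict):
    """System 2: pick the adjacent concept the learner seems most
    ready for (score >= 0.5) and return a steering note, or None."""
    neighbors = CONCEPT_GRAPH.get(current_topic, [])
    ready = [n for n in neighbors if readiness.get(n, 0.0) >= 0.5]
    if not ready:
        return None
    target = max(ready, key=lambda n: readiness[n])
    return f"When natural, bridge from {current_topic} to {target}."

note = strategize("sharks", {"megalodon": 0.9, "fossilization": 0.4})
print(note)  # When natural, bridge from sharks to megalodon.
```

In a real system the note would be appended to the conversational thread's context between turns, so the "system 1" model can weave the bridge in naturally rather than being forced onto a new topic.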
I mentioned this in another comment, but I've been really pleasantly surprised at the quality of the tutoring, especially when it bridges into new topics. One of my sons is really into Slay the Spire, and at different times it's used that as a launching-off point into probabilities, decision trees, Python code for the algorithms he thinks about as he faces different enemies, and general strategy. My other son was really into sharks, which it has bridged into extinct sharks like megalodon, how scientists infer what it looked like given cartilage's lower propensity to fossilize, then dinosaurs and their fossils, the K-Pg extinction event, and how food scarcity selected for smaller animals like the ancestors of birds and our small mammalian ancestors. And a whole bunch of other topics.
It's been pretty great in that way, but my biggest open question at the moment is how to get them to engage with it on their own on a more regular basis - they go to it occasionally for random questions, but to get good coverage of that huge knowledge graph would take much more. And fundamentally, I think that human engagement still just has a number of important aspects to it that it's lacking, and I'm not sure if it's possible to replace those well enough.
i think what should be taught is the metacognitive ability - like how to retrieve knowledge, how to ask the right questions toward a certain goal. knowledge itself is easily accessible with ai. now the difficult part is the ability to discern actual knowledge from llm hallucination bs, and the ability to retrieve the required knowledge given a scenario.
this still requires some foundational grounding — you can't detect bullshit with zero context. but the balance shifts from memorization to retrieval, iteration, verification. honestly i think it is more about critical thinking and philosophy.
> “Students aren’t great at asking questions well.”
In my interactions with my kids' public school and their teachers, their goal is to ram content down students' throats and test for retention, not to foster an environment open to questions.
Had a teacher claim straight up that they don't believe the system works and are just in teaching for the benefits and summer vacation.
IMO Sal Khan's revolution hasn't happened because the adults in charge right now are ignorant and inept but incredibly vain nonetheless
That said, I do think it's particularly hilarious that KA's answer to students not wanting to use the product is to make the product more integral to the experience.
Who would have thought?
…sounds a lot like Investors versus those who actually perform “work” as it’s defined in research literature.
But I’m sure a shoe company pivoting to AI isn’t a sign of a bubble about to burst, nope.