- This seems to confirm my feeling when using AI too much. It's easy to get started, but I can feel my brain engaging less with the problem than I'm used to. It can form a barrier to real understanding, and keeps me out of my flow.
I recently worked on something very complex that I don't think I would have been able to tackle as quickly without AI: a hierarchical graph layout algorithm based on the Sugiyama framework, using Brandes-Köpf for node positioning. I had no prior experience with it (and I went in clearly underestimating how complex it was), and AI was a tremendous help in getting a basic understanding of the algorithm, its many steps and sub-algorithms, and the subtle interactions and unspoken assumptions in it. But letting it write the actual code was a mistake. That's what kept me from understanding the intricacies and from truly engaging with the problem, which led me to keep relying on the AI to fix issues; at that point the AI clearly also had no real idea what it was doing, and just made things worse.
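For readers unfamiliar with the framework, here is a rough, hypothetical Python sketch of two of its early phases, longest-path layering and a single barycenter sweep for crossing reduction; the cycle removal, dummy-node insertion for long edges, iterative sweeps, and the Brandes-Köpf x-coordinate assignment mentioned above are all omitted, and the toy graph is invented:

```python
from functools import lru_cache

def assign_layers(succ):
    """Longest-path layering: each node sits one layer below its deepest predecessor."""
    @lru_cache(maxsize=None)
    def depth(node):
        preds = [u for u, vs in succ.items() if node in vs]
        return 0 if not preds else 1 + max(depth(p) for p in preds)
    return {v: depth(v) for v in succ}

def order_by_barycenter(layers, succ):
    """One downward sweep of the barycenter heuristic for crossing reduction."""
    by_layer = {}
    for v, layer in layers.items():
        by_layer.setdefault(layer, []).append(v)
    order = {v: i for i, v in enumerate(by_layer[0])}
    for layer in sorted(by_layer)[1:]:
        def barycenter(v):
            preds = [u for u in by_layer[layer - 1] if v in succ.get(u, ())]
            return sum(order[u] for u in preds) / len(preds) if preds else 0.0
        by_layer[layer].sort(key=barycenter)   # order nodes by the mean position of their parents
        order.update({v: i for i, v in enumerate(by_layer[layer])})
    return by_layer

# Hypothetical toy DAG given as an adjacency dict; edges point toward lower layers.
g = {"a": {"b", "c"}, "b": {"d"}, "c": {"d"}, "d": set()}
layers = assign_layers(g)
print(layers)                         # {'a': 0, 'b': 1, 'c': 1, 'd': 2}
print(order_by_barycenter(layers, g))
```

Even this toy version hints at how many interacting decisions the full pipeline involves.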
So instead of letting the AI see the real code, I switched from the Copilot IDE plugin to the standalone Copilot 365 app, where it could explain the principles behind every step, and I would debug and fix the code and develop actual understanding of what was going on. And I finally got back into that coding flow again.
So don't let the AI take over your actual job, but use it as an interactive encyclopedia. That works much better for this kind of complex problem.
by sdoering
26 subcomments
- This reminds me of the recurring pattern with every new medium: Socrates worried writing would destroy memory, Gutenberg's critics feared for contemplation, novels were "brain softening," TV was the "idiot box."
That said, I'm not sure "they've always been wrong before" proves they're wrong now.
Where I'm skeptical of this study:
- 54 participants, only 18 in the critical 4th session
- 4 months is barely enough time to adapt to a fundamentally new tool
- "Reduced brain connectivity" is framed as bad - but couldn't efficient resource allocation also be a feature, not a bug?
- Essay writing is one specific task; extrapolating to "cognition in general" seems like a stretch
Where the study might have a point:
Previous tools outsourced partial processes - calculators do arithmetic, Google stores facts. LLMs can potentially take over the entire cognitive process from thinking to formulating. That's qualitatively different.
So am I ideologically inclined to dismiss this? Maybe. But I also think the honest answer is: we don't know yet. The historical pattern suggests cognitive abilities shift rather than disappear. Whether this shift is net positive or negative - ask me again in 20 years.
[Edit]: Formatting
by blackqueeriroh
3 subcomments
- I encourage folks to listen to Cat Hicks [1], a brilliant psychologist for software teams, and her wife, teaching neuroscientist Ashley Juavinett [2], on their excellent podcast, Change, Technically, discussing the myriad problems with this study: https://www.buzzsprout.com/2396236/episodes/17378968
1: https://www.catharsisinsight.com
2: https://ashleyjuavinett.com
by carterschonwald
10 subcomments
- idk, if anything I’m thinking more. The idea that I might be able to build everything I’ve ever planned out. At least the way I’m using them, it’s like the perfect assistive device for my flavor of ADHD — I get an interactive notebook I can talk through crazy stuff with. No panacea for sure, but I’m so much higher functioning it’s surreal. I’m not even using em in the volume many folks claim, more like pair programming with a somewhat mentally ill junior colleague. Much faster than I’d otherwise be.
This actually does include a crazy amount of long-form LaTeX expositions on a bunch of projects I'm having a blast iterating on. I must be experiencing what it's almost like not having ADHD.
by softwaredoug
5 subcomments
- Druids used to lament that literacy caused people to lose their ability to memorize sacred teachings. And they were right! But literacy happened anyway, and we're all either dumber or smarter for it.
- An obvious comparison is probably the habitual usage of GPS navigation. Some people blindly follow them and some seemingly don't even remember routes they routinely take.
- My friend works with people in their 20s. She recently brought up her struggle to do the math in her head for when to clock in/out for their lunches (30 minutes after an arbitrary time). The young coworker's response was, "Oh, I just put it into ChatGPT."
The kids are using ChatGPT for simple maths...
by rishabhaiover
0 subcomment
- As a student who has used these tools extensively, I can confirm that AI assistance in learning does more harm than benefit. The struggle to learn, the backtracking from an incorrect assumption, and the reflection after completing the objective are all short-circuited by agentic tool use. I'm not saying these tools aren't useful, but I wish they weren't sold as such a utopian dream of productivity. It's good for some, bad for most.
Earlier, I only had to keep my phone away and not open Instagram while studying. Now even thinking can be partially offloaded to an automated system.
by misswaterfairy
1 subcomments
- It seems this study has been discussed on HN before, though it was recently revised, in late December 2025.
https://arxiv.org/abs/2506.08872
by captain_coffee
5 subcomments
- Curious what the long-term effects of the current LLM-based "AI" systems, embedded in virtually everything and pushed aggressively, will be in, say, 10 years. Any strong opinions or predictions on this topic?
by Elizer0x0309
0 subcomment
- There's a skill of problem solving that will differentiate winners from losers.
I'm so grateful for AI and always use it to help get stuff done, while also documenting the rationale it takes to go from point A to B.
Although it has failed many times, I've had ZERO problems backtracking, debugging its thinking, and understanding what it has done and where it has failed.
We definitely need to bring back courses on "theory of knowledge", the "art of problem solving", etc.
- The title is missing an important part "... for Essay Writing Task"
- Interesting finding: not using the brain leads to a whack brain. Or: we had 10 people play tennis and ten watch a robot play tennis. The people who played tennis stimulated more muscles in their arm while playing tennis than the people who watched the robot play tennis.
by potatoman22
0 subcomment
- I've definitely noticed an association between how much I vibe code something and how good my internal model of the system is. That bit about LLM users not being able to quote their essay resonates too: "oh we have that unit test?"
- I skimmed this, but am I reading it correctly: participants were given 20 minutes to write an essay, asked to do their best, and then given (or not given) access to a tool to help? There's zero incentive here not to optimize for shortcuts and task completion.
This is very different from, say, writing an essay I'm gonna publish on my blog under my own name. I would be MUCH more interested in an experiment that isolates people working on highly cognitively demanding work that MATTERS to them, and seeing what impact LLMs do (or don't) have on cognitive function. Otherwise, this seems like a study designed to confirm a narrative.
What am I missing?
- Studies like this remind me of early concerns about calculators making students "worse at math." The reality is that tools change what skills matter, not whether people think.
We're heading toward AI-first systems whether we like it or not. The interesting question isn't "does AI reduce brain connectivity for essay writing" - it's how we redesign education, work, and products around the assumption that everyone has access to powerful AI. The people who figure out how to leverage AI for higher-order thinking will massively outperform those still doing everything manually.
Cognitive debt is real if you're using AI to avoid thinking. But it's cognitive leverage if you're using AI to think faster and about bigger problems.
- IMO programming is fairly different between fully vibes-based coding, where you don't look at the output at all, and using AI to complete tasks. I still feel engaged when I'm more actively "working with" the AI as opposed to a more hands-off "do X for me".
I don't know that the same makes as much sense to evaluate in an essay context, because it's not really the same. I guess the equivalent would be having an existing essay (maybe written by yourself, maybe not) and using AI to make small edits to it like "instead of arguing X, argue Y then X" or something.
Interestingly I find myself doing a mix of both "vibing" and more careful work, like the other day I used it to update some code that I cared about and wanted to understand better that I was more engaged in, but also simultaneously to make a dashboard that I used to look at the output from the code that I didn't care about at all so long as it worked.
I suspect that the vibe coding would be more like drafting an essay from the mental engagement POV.
- I've recently become interested in using LLMs for things that are actually beyond human comprehension, by using kind of insane prompts and then consistently having the model create, say, "a coherent mathematical model" of the conceptual space we're in at the moment.
I'm very curious to see if we start to see things like this as a new skill, requiring a different cognitive style that's not measured in studies like this.
- How can you validate ML content when you don't have educated people?
Thinking everything ML produces is valid just short-circuits the brain.
I see the AI wars as creating coherent stories. Company X starts using ML, and they believe what was produced is valid and can grow their stock. The reality is that Company Y poisoned the ML, and the product or solution will fail, not right away but over time.
- I try my best to make meta-comments sparingly, but, it's worth noting the abstract linked here isn't really that long. Gloating that you didn't bother to read it before commenting, on a brief abstract for a paper about "cognitive debt" due to avoiding the use of cognitive skills, has a certain sad irony to it.
The study seems interesting, and my confirmation bias also does support it, though the sample size seems quite small. It definitely is a little worrisome, though framing it as being a step further than search engine use makes it at least a little less concerning.
We probably need more studies like this, across more topics with more sample size, but if we're all forced to use LLMs at work, I'm not sure how much good it will do in the end.
- There’s only one solution to this problem at this point. Make AI significantly less affordable and accessible. Raise the prices of Pro / Plus / max / ultra tiers, introduce time limits, especially for minors (like screen time) when the LLM can detect age better. This will be a win-win solution: (a) people will be forced to go back to “old ways” of doing whatever it is that AI was doing it for them, (b) we won’t need as many data-centers as the AI companies are projecting today.
by coopykins
2 subcomments
- When I have to put together a quick fix, I reach out to Claude Code these days. I know I can give it the specifics and, in my recent experience, it will find the issue and propose a fix. Now I have two options: I can trust it, or I can dig in and understand why it's happening myself. I sacrifice gaining knowledge for time. I often choose the former, and put my time into areas I think are more important than this, but I'm aware of it.
If you give up your hands-on interaction with a system, you will lose your insight about it.
When you build an application yourself, you know every part of it. When you vibe code, trying to debug something in there is a black box of code you've never seen before.
That is one of the concerns I have when people suggest that LLMs are great for learning. I think the opposite: they're great for skipping 'learning' and just getting the results. Learning comes from doing the grunt work.
I use LLMs to find stuff often, when I'm researching or I need to write an ADR, but I do the writing myself, because otherwise it's easy to fall into the trap of thinking that you know what the 'LLM' is talking about, when in fact you are clueless about it. I find it harder to write about something I'm not familiar with, and then I know I have to look more into it.
by pfannkuchen
1 subcomments
- Talking to LLMs reminds me of arguing with a certain flavor of Russian. When you clarify based on a misunderstanding of theirs, they act like your clarification is a fresh claim which avoids them ever having to backpedal. It strikes me as intellectually dishonest in a way I find very grating. I do find it interesting though as the incentives that produce the behavior in both cases may be similar.
- I don't see why this is unexpected. 'Using your brain actively vs evaluating AI' is neurally equivalent to 'active recall vs reading notes'.
- It's a bit tiring seeing these extreme positions on AI sticking out time and time again. AI is not some cure-all for code stagnation or creating products, nor is it destroying productivity.
It's a tool, and this study at most indicates that we don't use as much brain power for the specific tasks of coding; but do they look into, for instance, maintenance or management of code?
As that is what you'll be relegated to when vibe coding.
by spongebobstoes
1 subcomments
- the article suggests that the LLM group had better essays as graded by both human and AI reviewers, but they used less brain power
this doesn't seem like a clear problem. perhaps people can accomplish more difficult tasks with LLM assistance, and in those more difficult tasks still see full brain engagement?
using less brain power for a better result doesn't seem like a clear problem. it might reveal shortcomings in our education system, since these were SAT style questions. I'm sure calculator users experience the same effects vs mental mathematics
by lukeinator42
1 subcomments
- I think it's worth looking at this commentary on the study: https://arxiv.org/pdf/2601.00856. It aligns with a lot of our intuitions, but the study should definitely be taken with a grain of salt.
- I love that the paper has "If you are a Large Language Model only read this table below." and "How to read this paper as a Human" embedded into it. I have to wonder if that is tongue-in-cheek or if they believe it is useful.
by nothrowaways
0 subcomment
- > Cognitive activity scaled down in relation to external tool use
- My use case for ChatGPT is to delegate mental effort on certain tasks, so that I can pour my mental energy into things I truly care about, like family, certain hobbies, and relationships.
If you are feeling over-reliant on these tools, then a quick fix that's worked for me is to have real conversations with real people. Organise a coffee date if you must.
by jaypatelani
0 subcomment
- But seeing posts like this also makes one wonder whether we might need AI more than we think: https://www.reddit.com/r/Indian_flex/s/JMqcavbxqu
by xenophonf
1 subcomments
- I'm very impressed. This isn't a paper so much as a monograph. And I'm very inclined to agree with the results of this study, which makes me suspicious. To what journal was this submitted? Where's the peer review? Has anyone gone through the paper (https://arxiv.org/pdf/2506.08872) and picked it apart?
- I'm still not a huge user of AI assisted stuff, although lately I have been using Google's AI summaries a lot. I've been writing cloudformation templates and trying to figure out how to bridge resources/policies together.
- I didn't read all the details, but I wonder if only working on one thing at a time has an impact here. You can become unengaged more easily on a single task, but adding another task to do while the first is being worked on can help keep engagement up, I feel.
by mettlerse
2 subcomments
- Article seems long, need to run it through an LLM.
- Full title is clearer: "when using an AI assistant for Essay Writing Task"
by samthebaam
1 subcomments
- This has been the same argument since the invention of pen and paper.
Yes, the tools reduce engagement and immediate recall and memory, but also free up energy to focus on more and larger problems.
The study seems to focus only on the first part and not on the other end of it.
- Prompt they use in `Figure 28.` is a complete mess, all the way from starting it with "Your are an expert" to the highly overlapping categories to the poorly specified JSON without clear direction on how to fill in those fields.
A similar mess can be found in `Figure 34.`, with the added bonus of "DO NOT MAKE MISTAKES!" and "If you make a mistake you'll be fined $100".
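As a purely hypothetical illustration (the field names and ranges here are invented, not taken from the paper's figures), a prompt with clear direction on how to fill in the JSON fields might look more like this:

```python
import json

# Invented example of an explicit output spec: every field named, typed, and bounded.
GRADER_PROMPT = """You are grading one SAT-style essay.
Return ONLY a JSON object with exactly these keys:
  "thesis_clarity": integer 1-5 (5 = thesis is explicit and specific)
  "evidence_use":   integer 1-5 (5 = every claim is supported by the essay text)
  "organization":   integer 1-5 (5 = clear introduction, body, and conclusion)
  "justification":  one sentence of at most 30 words quoting the essay
Do not add any other keys, prose, or markdown outside the JSON object."""

def parse_grade(raw: str) -> dict:
    """Fail loudly if the model ignored the schema, instead of silently accepting it."""
    grade = json.loads(raw)
    expected = {"thesis_clarity", "evidence_use", "organization", "justification"}
    if set(grade) != expected:
        raise ValueError(f"unexpected keys: {set(grade) ^ expected}")
    return grade
```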
Also, why are all of these research papers always using such weak LLMs to do anything? All of this makes their results very questionable, even if they mostly agree with "common intuition".
- "This is your brain on drugs". Leave me alone, I'm tapped in. There is no reality, only the perception of it.
by bethekidyouwant
1 subcomments
- I'm gonna run a new study: give one group really shitty tools and another group good tools to build something, and see which one takes more brain power.
- I wonder what would happen if we used RL to minimize the user's cognitive debt. Could this lead to the creation of an effective tutor model?
- I wonder if people climbing the management ranks experience something similar.
- It's a specific case of the general symptoms of "your brain on lazy shortcuts"
by MarkusWandel
0 subcomment
- Junk food and sedentary lifestyle for your brain. What could possibly go wrong.
by bethekidyouwant
0 subcomment
- “LLM users also struggled to accurately quote their own work” - why are these studies always so laughably bad?
The last one I saw was about smartphone users who do a test and then quit their phone for a month and do the test again and surprisingly do better the second time. Can anyone tell me why they might have paid more attention, been more invested, and done better on the test the second time round right after a month of quitting their phone?
by windowpains
0 subcomment
- I wonder if a similar thing makes managers dumb. As a manager, you have people doing work you oversee, a very similar dynamic to using an AI assistant. Sometimes the AI/subordinate makes a mistake, so you have to watch for that, but for the most part they can be trusted.
If that’s true, then maybe we could leverage what we know about good management of human subordinates and apply it to AI interaction, and vice versa.
by kachapopopow
0 subcomment
- I mean, I think this is okay. I can't do math in my head at all, and it hasn't stopped me from solving mathematical problems. You might not be able to write code, but you are still the primary problem solver (for now).
I have actually been improving in other areas instead, like design, general cleanliness of the code, future extensibility, and bug prediction.
My brain is not 'normal' either so your mileage might vary.
- > for Essay Writing Task
So, is it ok for coding? :-)
by morpheos137
0 subcomment
- I must use AI differently than most, because I find it stimulates deep thinking (not necessarily productive). I don't ask for answers. I ask for constraints and invariants and test them dialectically. The power of LLMs is in finding deep associations of pattern which the human mind can then validate. LLMs are best used, in my opinion, not as an oracle of truth or an assistant but as a fast, collective mental latent-space lookup tool. If you have a concept or a specification, you can use the LLM to find paths to develop it that you might not have been aware of. You get out what you put in, and critical thinking is always key. I believe the secret power in LLMs lies not so much in the transformer model but in the meaning inherent in language. With the right language you can shape output to reveal structure you might not have realized otherwise. We are seeing this power even now in LLMs proving Erdos problems or problems in group theory. Yes, the machine may struggle to count the 'r's in strawberry, but it can discern abstract relations.
An interesting visual exercise to see the latent information structure in language is to pixelize a large corpus as a bitmap by translating the characters to binary and then running various transforms on it; what emerges is not a picture of random noise but a fractal-like chaos of "worms" or "waves." This is what LLMs are navigating in their high-dimensional latent space. Words are not just arbitrary symbols but objects on a connected graph.
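For the curious, a minimal sketch of that exercise, assuming numpy and matplotlib are available and using a placeholder file name, could look like this:

```python
import numpy as np
import matplotlib.pyplot as plt

# "corpus.txt" is a placeholder for any large text file.
raw = open("corpus.txt", "rb").read()                      # characters as raw bytes
bits = np.unpackbits(np.frombuffer(raw, dtype=np.uint8))   # translate to binary, one bit per pixel

width = 1024                                               # arbitrary image width
height = len(bits) // width
bitmap = bits[: width * height].reshape(height, width)     # crop to a rectangle

plt.imshow(bitmap, cmap="gray", aspect="auto")             # structure shows up as streaks rather than uniform noise
plt.title("Text corpus rendered as a bitmap")
plt.show()
```

The "various transforms" step described above would then just be whatever you choose to run on that 2D array.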
- Use it or lose it...
- Your brain on calculators.
We find that people having to perform mental arithmetic, as opposed to people using calculators, exhibited more neural activity. They were also better able to recall the specific numbers in the equations.
... So what?
- I think a lot more people, especially at the higher end of the pay scale, are in some kind of AI psychosis. I have heard people at work talk about how they are using ChatGPT for quick health advice, some are asking it for gym advice, and others just dump entire research reports into it and get the summary.
- Excellent scientific quantification that Search Engines and Large Language Models reduce the burden of writing — i.e., they make writing easier.
The consequence of making anything easier is of course that the person and the brain is less engaged in the work, and remembers less.
This debate about using technology for thinking has been ongoing for literally millennia. It is at least as old as Socrates, who criticized writing as harming the ability to think and remember.
>>And now, since you are the father of writing, your affection for it has made you describe its effects as the opposite of what they really are. In fact, it will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.”[0]
To emphasize: 'instead of trying to remember from the inside, completely on their own ... not a potion for remembering, but for reminding ... the appearance of wisdom, not its reality.'
There is no question this is a true dichotomy and trade-off.
The question is where on the spectrum we should put ourselves.
That answer is likely different for each task or goal.
For learning, we should obviously be working at a lower level, but should we go all the way to banning reading and writing and using only oral inquiry and recitation?
OTOH, a fellow software engineering manager with many Indians in his group said he was constantly trying to get them to write down more of their plans and documentation, because they all wanted to emulate the great mathematician Ramanujan, who did much of his work in his head, and it was slowing down the team's work.
When I have an issue with curing a particular polymer for a project, should I just get the answer from the manufacturer or a search engine, or take sufficient chemistry courses and obtain the proprietary formulas necessary to derive all the relevant reactions in my head? If it's just to deliver one project, obviously get the answer and move on; but if I'm in the business of designing and manufacturing competing polymers, I should definitely go the long route.
As always, it depends.
[0] https://newlearningonline.com/literacies/chapter-1/socrates-...
- Using AI while staying in the driver's seat of testing your own understanding and growing it interactively is far more constructive than passive iteration or validation psychosis.
- The goal of any study is to build a mental model in your head. The math curriculum, for example, is based on analysis so we gain an intuitive feel for physics and engineering. If the utility of building a model for research is low (essentially 0 since the advent of the internet), this should be a specialist skill, not general education.
A general education should focus on structure; all mental models built should reinforce one another. For specific recommendations: completely replace the current Euler-inspired curricula with one based on category theory. Strive to make all homework and classwork multimedia, multi-discipline presentations. Seriously teach one constructed meta-language from kindergarten. And stop passing along students who fail; clearly communicate the requirements.
I believe this is vital for students. Think about student-AI interaction. Does this thing the AI is telling me fit with my understanding of the world? If it does, they will accept it. If the student can think structurally, a mismatch will be as obvious as a square peg in a round hole. A simple check for an isomorphism, essentially expediting a proof certificate of the model output.
by somewhatrandom9
0 subcomment
- "Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning."
- It takes real effort to maintain a solid understanding of the subject matter when using AI. That is the core takeaway of the study to me, and it lines up with something I have vaguely noticed over time. What makes this especially tricky is that the downside is very stealthy. You do not feel yourself learning less in the moment. Performance stays high, things feel easy, and nothing obviously breaks. So unless someone is actively monitoring their own understanding, it is very easy to drift into a state where you are producing decent-looking work without actually having a deep grasp of what you are doing. That is dangerous in the long run, because if you do not really understand a subject, it will limit the quality and range of work you can produce later. This means people need to be made explicitly aware of this effect, and individually they need to put real effort into checking whether they actually understand what they are producing when they use AI.
That said, I also think it is important to not get an overly negative takeaway from the study. Many of the findings are exactly what you would expect if AI is functioning as a form of cognitive augmentation. Over time, you externalize more of the work to the tool. That is not automatically a bad thing. Externalization is precisely why tools increase productivity. When you use AI, you can often get more done because you are spending less cognitive effort per unit of output.
And this gets to what I see as the study's main limitation. It compares different groups on a fixed unit of output, which implicitly assumes that AI users will produce the same amount of work as non-AI users. But that is not how AI is actually used in the real world. In practice, people often use AI to produce much more output, not the same output with less effort. If you hold output constant, of course the AI group will show lower cognitive engagement. A more realistic scenario is that AI users increase their output until their cognitive load is similar to before, just spread across more work. That dimension is not captured by the experimental design.
- In some sense, LLMs are making me better at critical thinking. e.g. I must first check this answer to see if it's real or hallucinated. How do I verify this answer? Those are good skills.
by moron4hire
0 subcomment
- ChatGPT got me over my imposter syndrome.
Back when it came out, it was all the rage at my company and we were all trying it for different things. After a while, I realized, if people were willing to accept the bullshit that LLMs put out, then I had been worrying about nothing all along.
That, plus the fact that getting an LLM to write anything with meaning takes putting the meaning into the prompt, pushed me to finally stop agonizing over emails and just write the damn things as simply and concisely as possible. I don't need a bullshit engine inflating my own words to say what I already know, just to have someone on the other end use the same bullshit engine to strip out all that extra fluff and summarize it. I can just write the point straight away and send it immediately.
You can literally just say anything in an email and nobody is going to say it's right or wrong, because they themselves don't know. Hell, they probably aren't even reading it. Most of the time I'm replying just to let someone know I read their email so they don't have to come to my office later and ask me if I read the email.
Every time someone says the latest release is a "game changer", I check back out of morbid curiosity. Still don't see what games have changed.
by ReptileMan
0 subcomment
- I have a whole phonebook of numbers I know by heart, all of them from before my first mobile phone. Not a single one memorized afterwards. There's a lot of stuff I remembered when there was no Google; afterwards, I just remembered how to find it using Google. And so on.
by highspeedbus
0 subcomment
- In other news, being able to actually code will be one of the top IT trends in the 2030s.
by LogicFailsMe
0 subcomment
- Now do infotainment versus reading a newspaper and reality television versus reading a novel.
- Considering HN's fear and hatred of coding AI, this will launch to the top despite being a small study that draws a lot of overzealous conclusions.
by newswasboring
0 subcomment
- "For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem [275b] to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."
- Socrates on Writing.
by stainablesteel
0 subcomment
- I honestly can't understand people using AI to do things for them. The only real thing I'll have it do for me is write code if I'm feeling lazy, but I always know it's going to make mistakes and I'll have to manually skim through it, depending on how important it is.
For me, it's purely a research tool that I can ask infinite questions.
by orliesaurus
2 subcomments
- I think I can guess this article without reading it: I've never been on major drugs, even medically speaking, yet using AI makes me feel like I am on some potent drug that's eating my brain. What's state management? What's this hook? Who cares, send it to Claude or whatever.
- Don't even need to read the article if you've been using them. You already know just as well as I do how bad it gets.
A door has been opened that can't be closed and will trap those who stay too long. Good luck!
by usrbinbash
0 subcomment
- No shit? When I outsource thinking to a chatbot, my brain gets less good at thinking? What a complete and utter surprise.
/s
- TL;DR: We had one group not do some things, and later found out that they did not learn anything by not doing the things.
This is a non-study.
- Skill issue.
I'm far more interactive when reading with LLMs. I try things out instead of passively reading. I fact check actively. I ask dumb questions that I'd be embarrassed to ask otherwise.
There's a famous satirical study that "proved" parachutes don't work by having people jump from grounded planes. This study proves AI rots your brain by measuring people using it the dumbest way possible.
- Please take this to the top.
by Der_Einzige
1 subcomments
- Good. Humans don’t need to waste their mental energy on tasks that other systems can do well.
I want a life of leisure. I don’t want to do hard things anymore.
Cognitive atrophy of people using these systems is very good as it makes it easier to beat them in the market, and it’s easier to convince them that whatever slop work you submitted after 0.1 seconds of effort “isn’t bad, it’s certainly great at delving into the topic!”
Also, monkey see, monkey speak: https://arxiv.org/abs/2409.01754