One argument I missed in the article, however, is that I would also apply this idea of learning and struggling to LLMs themselves. There is knowledge to be gained in how to use LLMs as a tool, similar to how you can use Google or a calculator as a tool without losing the underlying understanding yourself. That said, I do think the risk of knowledge loss is more pronounced with LLMs, because they cover so much more ground.
I also think Bob (from the article) will likely learn to use LLMs better than Alice. Sure, he might not understand physics as deeply as Alice does, but he might become very good at writing papers with LLMs that get accepted at conferences and build his career.
The author is a bit naive here:
1. Society only progresses when people specialise and can delegate their thinking.
2. Specialisation has been happening for millennia. Agriculture allowed people to specialise because food became abundant.
3. We accept delegation of thinking in every part of life. A manager delegates thinking to their subordinates; I delegate some thinking to my accountant.
4. People will eventually get the hang of using AI with the optimal amount of delegation, retaining what is necessary and delegating what is not. People who fail to strike this balance will be outcompeted.
The author focuses on local problems like skill atrophy but misses the larger picture: this specific pattern has repeated many times throughout humanity's history.