I have found that it is much better at answering questions if you start with code it wrote instead of your own code or someone else's, so I boil my question down to a simple programming task and start by having it write that code. For example, there were some things I was unsure about with VMs/bytecode interpreters/compilers, so I started my session by asking ChatGPT to write me a simple Forth VM in C and then used that as the jumping-off point.
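For the curious, here's a minimal sketch of the kind of starting point I mean: a toy stack-based VM with a few Forth-like primitives. The opcode names and tiny instruction set are illustrative (my own invention, not ChatGPT's actual output), and it's nowhere near a real Forth, but it's the sort of small, concrete artifact that makes follow-up questions productive:

    /* Toy stack-machine VM with a few Forth-like primitives. */
    #include <stdio.h>

    enum op { OP_PUSH, OP_ADD, OP_MUL, OP_DUP, OP_PRINT, OP_HALT };

    typedef struct { enum op op; int arg; } insn;

    void run(const insn *code) {
        int stack[256];
        int sp = 0; /* index of next free slot */
        for (const insn *ip = code; ; ip++) {
            switch (ip->op) {
            case OP_PUSH:  stack[sp++] = ip->arg; break;
            case OP_ADD:   sp--; stack[sp-1] += stack[sp]; break;
            case OP_MUL:   sp--; stack[sp-1] *= stack[sp]; break;
            case OP_DUP:   stack[sp] = stack[sp-1]; sp++; break;
            case OP_PRINT: printf("%d\n", stack[--sp]); break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void) {
        /* Equivalent of the Forth program "2 3 + dup * ." -> prints 25. */
        insn program[] = {
            { OP_PUSH, 2 }, { OP_PUSH, 3 }, { OP_ADD, 0 },
            { OP_DUP, 0 }, { OP_MUL, 0 }, { OP_PRINT, 0 }, { OP_HALT, 0 }
        };
        run(program);
        return 0;
    }

From something this size you can ask concrete questions like "how would threaded code change this dispatch loop?" instead of arguing abstractions.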
With coding, results are often proportionate to the quality, thoroughness, and structure of my prompts, plus a good deal of debugging.
With intellectual pursuits, any output that is interesting or previously unknown to me must be verified. I find that the farther I push past the safety rails (always ethically, though often harshly), the greater the model's inclination to hallucinate, appease/placate, or evade through reframing, detours, and quite a number of clever but counterproductive maneuvers.
But I'd say, have no worries. Critical thinking is to the LLM what copper is to the cable: it simply doesn't work without it. And because of this, an unintended benefit comes from exercising one's attentiveness and scrutiny. Just keep in mind that a Pandora's box exists behind that scrutiny. Depending on one's patience and persistence, this alone is a deep and fascinating pursuit in itself.
The only way AI will kill a skill is the same way anything else would: through the underlayment, i.e., indolence, naivety, etc. For the generally aware, it's a force multiplier, and for me, a new form of existential mirror.
Request quizzes or hints before it shows the answer. Have it proceed in a Socratic dialogue style, where you can reply "Figured it out" to abort. For example, a prompt along the lines of: "Quiz me on this topic; give hints, not answers, until I reply 'Figured it out.'"
I think some output from AI will naturally be incorrect, and this forces people to question the output constantly, so I'm not sure there is much actual current risk of losing such skills.
I think the answer is: you cannot.
John Henry