Anyway, the "we have AI, so soon there will be nothing left to discover" line is similar to the belief at the end of the 19th century that everything had been discovered and only increasing measurement precision remained. At the very least, we have a lot to learn about ourselves and about how we understand reality, in light of what AI could uncover with methods different from the traditional ones.
I found that attitude intellectually reprehensible then, and I still do now.
This is not entirely new. For example, we had working (if inefficient) steam engines and pumps long before the development of thermodynamics. We had beer and cheese long before microbiology.
A while ago I read "Against Method" by Paul Feyerabend, and there's a section that really stuck with me, where he talks about the "myth" of Galileo. His point is that Galileo serves as a sort of mythological prototype of a scientist, and that by picking at the loose ends of the myth one can identify some contradictory elements in the popular conception of the "scientific method". One of his main points of contention is Galileo's faith in the telescope, his novel implementation of bleeding-edge optics technology. Feyerabend argues that Galileo developed his telescope primarily as a military instrument, one that revolutionized gunnery (and naval artillery in particular). Having secured his finances with some wealthy patrons, he then began to hunt for nobler uses of his new tool, and landed on astronomy.
Feyerabend's point (and what I'm slowly working up to) is that applying this new (and untested) military tool to what was a very ancient and venerable domain of inquiry was actually kind of scandalous. Up until that point, all human knowledge of astronomy had been generated by direct observation of the phenomena; by introducing this new tool between the human and the stars, Galileo created a layer of separation that had never existed before, and this was the source of much of the contemporary controversy that led to his original censure. It was one thing to base your cosmology on what the human eye could detect, but it seemed very "wrong" (especially to the church) to insert an unfeeling lump of metal and glass into what had been a very "pure" interaction, one totally comprehensible to the typical educated person.
I feel like this article is expressing a very similar fear, and I think it's "missing the point" in much the same way. Human comprehension is frequently augmented by technology; no human can truly "understand" a gravitational wave experientially. At best we understand the nth-order 'signs' that the phenomenon imprints on the tools we construct. I'd argue that LLMs play a similar role in their application to math, for example. It's about widening our sensor array, more than it is delegating the knowledge work to a robot apprentice.