I really do think that people should be careful about what they say in public and measure their words. And further, I think that the author of that book ought to be silent on that particular subject.
As it reads now, I'm not sure whether this is an objective critique of EA or the gripes of someone who orbited the same social space and had a public falling-out.
They learned the wrong lesson from Death Note
Do these people not understand that crops need water? Higher temperatures mean higher evaporation rates. Vast swathes of Iran have become inhospitable due to water mismanagement. That will lead to millions of refugees fleeing the country. Climate change is like poverty in this respect. If you're poor in water, you can't afford to make any mistakes.
Longtermism is a curse on long-term thinking. You're not allowed to think about the next ten thousand years of humanity, because apparently that's too short a window.
Not just that. This type of thinking contradicts optimal control theory. Your model needs to produce an uninterrupted chain from the present to the future. Longtermism chops off the present, which means the initial state lies in the future. You end up with an unknown initial state, to which the Longtermists respond with a hack: they add a minimal set of constraints back in. That minimal set is the avoidance of extinction, which is to say they are fine with almost anything else.
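To make the control-theory point concrete, here's a rough sketch of the standard setup versus the longtermist version (generic cost L, dynamics f, horizon T; the symbols are illustrative, not taken from any longtermist text):

  % Standard optimal control: the plan is anchored to a known present state.
  \min_{u(\cdot)} \int_{0}^{T} L(x(t), u(t)) \, dt
  \quad \text{s.t.} \quad \dot{x}(t) = f(x(t), u(t)), \quad x(0) = x_0 \ \text{(known)}

  % The variant described above: the present is chopped off, so the
  % optimization effectively starts at some future time T_1 > 0 with an
  % initial state x(T_1) that nobody knows.
  \min_{u(\cdot)} \int_{T_1}^{T} L(x(t), u(t)) \, dt
  \quad \text{s.t.} \quad \dot{x}(t) = f(x(t), u(t)), \quad x(T_1) = \text{unknown}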
Based on that logic, you'd think Longtermists would be primarily concerned with colonizing planets in the solar system and with building resilient ecosystems on earth that could be replicated on other planets or in space colonies, but you see no such thing. Instead they got their brains fried by the possibility of runaway AI [0], and the earth is treated as a consumable to be used up and thrown away.
[0] The AI they worry about is extremely narrow. Tesla doors that can't be opened in an emergency because the battery has died don't count as runaway AI, but if you had to beg the Tesla's AI to open the door and it refused, that would be worthy of AI safety research. However, they wouldn't see any problem with using AI where it shouldn't be used in the first place.