Some good sources on the matter in general:
https://kevinkelly.substack.com/p/the-singularity-is-always-...
https://idlewords.com/talks/superintelligence.htm
> In particular, we may distinguish between a person-affecting perspective, which focuses on the interests of existing people, and an impersonal perspective, which extends consideration to all possible future generations that may or may not come into existence depending on our choices.
In philosophy, it's always fine to see where ideas lead. The rest of us, though, might pause here, because the "person-affecting" perspective is insane in this context. It gives full moral weight to whether you make things better or worse for people who happen to be alive right now, but no moral weight at all to whether you leave a better or worse world for people born any time after now. Wanna destroy the biosphere or the economy in a way that only really catches up with tomorrow's kids? Totally fine from the "person-affecting" perspective, because in some technical sense no individual was made worse off than they were before. They were born into the mess, so it's not a problem.
I don't think this is the case. And if Bostrom and whoever else in his clique actually wanted to empower intelligence, why aren't they fighting fiercely for free schooling, free food, free shelter, free health care and so on, to make sure that intelligent people, especially kids, do not go to waste?
It's also quite puzzling that he doesn't even refer to his earlier work in order to refute it, given that he wrote THE book on the risk of superintelligence.
Good philosophers focus on asking piercing questions, not on proposing policy.
> Would it not be wildly irresponsible, [Yudkowsky and Soares] ask, to expose our entire species to even a 1-in-10 chance of annihilation?
Yes, if that number is anywhere near reality, though there is considerable doubt that it is.
> However, sound policy analysis must weigh potential benefits alongside the risks of any emerging technology.
Must it? Or is this a deflection from concern about immense risk?
> One could equally maintain that if nobody builds it, everyone dies.
Everyone is going to die in any case, so this is a red herring that misframes the issue.
> The rest of us are on course to follow within a few short decades. For many individuals—such as the elderly and the gravely ill—the end is much closer. Part of the promise of superintelligence is that it might fundamentally change this condition.
"might", if one accepts numerous dubious and poorly reasoned arguments. I don't.
> In particular, sufficiently advanced AI could remove or reduce many other risks to our survival, both as individuals and as a civilization.
"could" ... but it won't; certainly not for me as an individual of advanced age, and almost certainly not for "civilization", whatever that means.
> Superintelligence would be able to enormously accelerate advances in biology and medicine—devising cures for all diseases
There are numerous unstated assumptions here, notably the assumption that all diseases are "curable", whatever exactly that means; the "cure" might require a brain transplant, for instance.
> and developing powerful anti-aging and rejuvenation therapies to restore the weak and sick to full youthful vigor.
Again, this just assumes that such things are feasible, as if an ASI is a genie or a magic wand. Not everything that can be conceived of is technologically possible. It's like saying that with an ASI we could find the largest prime or solve the halting problem.
> These scenarios become realistic and imminent with superintelligence guiding our science.
So he baselessly claims.
Sorry, but this is all apologetics, not an intellectually honest search for truth.
I have bad news about how decision makers have responded to risks from nuclear weapons and climate change in the past. During the development of the bomb, the initial test was thought, by some but not all of the scientists, to have a small but plausible chance of igniting the atmosphere in a chain reaction. It was decided that threatening and destroying enemies was worth the risk.
Let us not even speak of the risks of MAD (for a treat, watch the British film "Threads"), or of the tipping points of climate catastrophe, which consistently turn out to be worse than the IPCC reports project, with new surprises every few years.
Of course, no such risk is worth taking to the average person. It only makes sense from an extremely narrow, hypercompetitive viewpoint held by elites and dumb dumbs.
wtf? death is part of life. is he seriously arguing that if we don't build AGI people will "keep dying"? and suggesting that this is as bad as extinction (or something worse, matrix-like)?
i don't think life would be as colorful and joyful without death. death is what makes life as precious as it is.