> 1. Democratization. We will resist the potential of this technology to consolidate power in the hands of the few.
For example, they could publish their models and research... instead of doing the opposite of what they claim is their very first principle.
> We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
What do people think is the probability that OpenAI would ever actually do this?
Do I even need to read their article?
No, I don't.
Change my mind. We just had King's Day in the Netherlands, so maybe I'm too alcohol-fueled, but I think I have the right amount of alcohol to call Sam Altman and the rest of the leadership out on it. They don't have principles, not good ones anyway.
I can't believe it has to be said. Yet, here we are. Nice to haves include: "We will not participate in the use of AI for mass surveillance," and "We will not participate in the use of AI for (cyber-)warfare."
* This will change anytime we want, whether you agree or not
* For employees who follow the current "principles": when we change them, if you're strict about principles, then please leave; we will hire new people
Help me imagine it, what are some examples of widespread flourishing we can look forward to?
Superintelligence aside, power in the present is already held by a small handful of companies, at least in the west. The principles are pretty good, vacuous though they may be.
And this won’t happen.
So another “do no evil” bla-bla which will ultimately be dropped
- Democratization. Why is it your prerogative sam bro? In other words, what he means is consolidate access so "We" can democratize. We choose who gets what.
- Empowerment. People are empowered by default. It's the totalitarians who curtail that empowerment. The fact that Sam thinks he has the power to "empower" people is arrogant at best. People are empowered already; you just build the tools and make them accessible at a reasonable price.
- Universal prosperity. This one pisses me off the most. Who TF made you the benevolent mayor of the universe? Are you running for president of the universe, and people ask: "Hey Sam, what would you do as president of the universe?" "I will bring universal prosperity"... yaaaay, Sama for president. FFS!
- Adaptability. Yep, we'll kiss the ring of whoever is in power, until we get in power. Then we will adapt to your needs, if needed.
You know who else has principles: Meta. (https://www.meta.com/about/company-info/?srsltid=AfmBOooT6i0...)
- Give people a voice. Read: ensure you control their voice. My take: who tf are you to give anyone a voice? Everyone HAS a voice.
- Build connection and community. Read: ensure that you control all the connections and communities so that you can steer elections and other important things. My take: people have been connecting already for thousands of years.
- Serve everyone. Read: control whom you serve. My take: serve everyone, except for totalitarian regimes and people with ideas that are not aligned with ours.
etc. etc.
However, I believe a lot of this is contingent on "things will be so prosperous we will figure out the hard stuff later." One major thing happening now, given the nature of AI enabling better AI, is that the improvements and advances concentrate the gains among fewer and fewer people. The AI boom has minted a handful of deca-billionaires, while millions lose their jobs or can't compete in this winner-takes-all world.
Of course universal basic income would be more feasible in a world enhanced by AI productivity, but in the meantime, the trend is "A few people get very very very rich and everyone else enters the lottery of circumstance over whether the chaos caused by AI will land them in a better or worse position."
What evidence do we have that this trend won't continue into this future of "universal prosperity"? Will current OpenAI employees and tech CEOs essentially become permanent dynasties, lording over empires of autonomous robots while the average person gets to share one? (Universal prosperity cannot change the amount of rare-earth minerals on the planet). Of course a space-faring asteroid mining future solves this, but not right away.
If the intention is to bury all that then I think it's going to have the exact opposite effect and make everyone remember.
Consent, harassment between teens, etc. are the cited reasons, I guess.
Show, don't tell.
Snatching this contract with the military was not a good sign of things to come from OpenAI or Sam Altman.
The documented pattern of lies and scheming is real.
Altman's 'beliefs' in his response to the Molotov cocktail:
https://blog.samaltman.com/2279512 (https://news.ycombinator.com/item?id=47724921)
1. Democratization is centralization. We will resist the potential of this technology to consolidate power in the hands of the few, by consolidating it in the hands of us, who are not few but correct.
2. Empowerment is compliance. We believe AGI can empower everyone to achieve the goals we have determined are worth achieving.
3. Prosperity is scarcity. We want a future where everyone can have an excellent life, which will require new economic models because the old ones will no longer function, for reasons unrelated to us.
4. Resilience is dependence. AGI will introduce new risks, which only AGI can solve, which only we can build.
5. Adaptability is revisionism. We continue to believe the only way to meet the challenges of an unpredictable future is to be prepared to update our positions, our charter, our nonprofit status, our safety commitments, our board, our cofounders, and our prior statements, all of which were operative at the time and are now inoperative and were never said.
I can definitely see how it is going to create a lot more value to society.
Principals my hole.
OpenAI is a business built on stolen work, run by a man who's busy stealing scans of people's eyes.
Actions speak louder than words.
"Our guiding principle is to make as much money as possible at the expense of absolutely everything and everyone on the planet"
Of course, you could paste that into basically any corps mission statement or values page and it wouldn't be out of place.
...as well as the potential to significantly worsen many aspects of society
Like who is the intended audience and what purpose does this serve?
I can't imagine that this will have the same powerful effect that Google's 'don't be evil' stuff did all those years ago.
People are just too cynical and have enough experience being burned by big tech companies. You might think that I'm speaking from a place of age and experience, but I think this applies to everyone, young and old. We're all using these devices and services from the cradle now, it seems, and we've all been burned by them or know someone who has. Kids know the big tech rug pull just like they know the rug they crawl on while sucking on a pacifier.
So what's the point of this? Is the intended audience internal? Is it just for the people who work at OpenAI, to distract them from the stories they hear in the news about their company, and from the things they hear people say about it in social gatherings before they admit that they work for OpenAI?
All the major AI shops are out trying to be the king of the jungle -- I don't think there can be a market in the end for all of them to be worth 2T+ giants.