by GranularRecipe
2 subcomments
- What I find interesting is the implicit prioritisation: explainability, (human) accountability, lawfulness, fairness, safety, sustainability, data privacy and non-military use.
by singiamtel
5 subcomments
- I found this principle particularly interesting:
Human oversight: The use of AI must always remain under human control. Its functioning and outputs must be consistently and critically assessed and validated by a human.
by conartist6
4 subcomments
- Feels like the useless kind of corporate policy, expressed in terms of the loftiest ideals instead of how to make real trade-offs with costs.
by mark_l_watson
1 subcomment
- Good guidelines. My primary principle for using AI is that it should be used as a tool under my control to make me better, by making it easier to learn new things and offering alternative viewpoints. Sadly, AI training seems headed towards producing 'averaged behaviors', while in my career the best I had to offer employers was an ability to think outside the box and have different perspectives.
How can we train and create AIs with diverse creative viewpoints? The flexibility and creativity of AIs, or lack thereof, should guide the proper principles of using AI.
by Schlagbohrer
2 subcomments
- It's about as detailed and helpful as saying, "Don't be an asshole"
- What's so special about military research or AI that the two can't be done together even though the organization is not in principle opposed to either?
- From that picture it looks like they want to do everything with AI. This is very sad.
by dude250711
0 subcomments
- > Responsibility and accountability: The use of AI, including its impact and resulting outputs throughout its lifecycle, must not displace ultimate human responsibility and accountability.
This is critical to understand if the mandate to use AI comes from the top: make sure to communicate from day one that you are using AI as mandated, not that you are increasing productivity as mandated.
Play dumb, and protect yourself from "if it's not working out then you are using it wrong" attacks.
by DisjointedHunt
1 subcomment
- This corporate crap makes me want to puke. It is a consequence of the forced bureaucracy of European regulation, particularly the EU AI Act, which is not well thought out and actively adds liability and risk for anyone on the continent touching AI, including old-school methods such as bank credit-scoring systems.
- So general that it says nothing. Very corporate.
by Temporary_31337
0 subcomments
- Blah, blah; people will simply use it as they see fit.
by macleginn
3 subcomments
- ‘Sustainability: The use of AI must be assessed with the goal of mitigating environmental and social risks and enhancing CERN's positive impact in relation to society and the environment.’ [1]
‘CERN uses 1.3 terawatt hours of electricity annually. That’s enough power to fuel 300,000 homes for a year in the United Kingdom.’ [2]
I think AI is the least of their problems, seeing as they burn a lot of trees for the sake of largely impractical pure knowledge.
[1] https://home.web.cern.ch/news/official-news/knowledge-sharin...
[2] https://home.cern/science/engineering/powering-cern