AI companies are turning to user-interface innovations to try to capture more unwilling training subjects. These UI tricks will feel like shit if they try to force adoption, full of warnings and disclaimers.
Reminiscent of the cookie law, which many people hate, but they hate it because companies insist on using tracking cookies (if you don't track, you don't need the cookie popup).
Privacy, safety, and reliability debates are bringing dark patterns back into public awareness. This is territory tech companies were trying very desperately to get out of.
I think it's also brilliant in the way it answers the black-box paradigm. "Oh, we can't explain it, it's a black box." "Then explain how you made it; otherwise it's a no-go."
Ultimately, this sets the discourse straight about what AI skepticism really is. It's not about being anti-commerce; it's about demanding good commercial conduct.
The AI vendors will NEVER fix any system flaw that can be ignored or hidden. Only a public database can force these flaws into the open.
To me, "ask" connotes that compliance is voluntary, which in some circumstances strikes me as an intentional rhetorical lie.
Where YOU live, you can have all the unbridled capitalism you want: be a product for tech bros and help make some executive a billionaire, I don't care!
Where I live, I want this shit regulated. So, good stuff, EU.