As the European Union edges closer to passing a wide-ranging set of laws governing artificial intelligence, politicians and experts say the surprise ousting of OpenAI CEO Sam Altman underscores the need for strict rules.
Altman, co-founder of the startup that last year kicked off the generative AI boom, was abruptly fired by OpenAI's board last week.
The sacking sent shockwaves through the tech world and prompted employees to threaten a mass resignation from the company.
Across the Atlantic, the European Commission, the European Parliament and the EU Council have been hashing out the fine print of the AI Act, a sweeping set of laws that would require some companies to complete extensive risk assessments and make data available to regulators.
In recent weeks, talks have hit stumbling blocks over the extent to which companies should be allowed to self-regulate.
Brando Benifei, one of two European Parliament lawmakers leading negotiations on the laws, told Reuters: "The understandable drama around Altman being sacked from OpenAI and now joining Microsoft shows us that we cannot rely on voluntary agreements brokered by visionary leaders."
"Regulation, especially when dealing with the most powerful AI models, needs to be sound, transparent and enforceable to protect our society," he added.
Reuters reported yesterday that France, Germany and Italy had reached an agreement on how AI should be regulated, a move expected to accelerate negotiations at the European level.
The three governments support "mandatory self-regulation through codes of conduct" for those using generative AI models, but some experts said this would not be enough.
Alexandra van Huffelen, Dutch minister for digitalisation, told Reuters the OpenAI saga underscored the need for strict rules.
"The lack of transparency and the dependence on a few influential companies in my opinion clearly underlines the necessity of regulation," she stated.
Meanwhile, Gary Marcus, an AI expert at New York University, wrote on social media platform X: "We can't really trust the companies to self-regulate AI where even their own internal governance can be deeply conflicted.
"Please don't gut the EU AI Act; we need it now more than ever."