Leaders of technology giants such as OpenAI, Meta and X are dictating the debate on the regulation of artificial intelligence, and their growing influence worries researchers and activists.

While artificial intelligence (AI) transforms industries and society itself, politicians are racking their brains over how to regulate it.
CEOs of the world’s leading technology companies have emerged as leading voices in this debate, presenting their views on the potential benefits and risks of AI.
Researchers and activists, however, have grown concerned about the sector’s influence over the conversation. They point to an overwhelming dominance of American companies, raising questions about the underrepresentation of other regions of the world, especially the Global South. They also fear that this dominance could overshadow critical issues such as breaches of privacy and labor protection.
“We have seen these companies guide the debate very skillfully,” Gina Neff, executive director of the Minderoo Center for Technology and Democracy at the University of Cambridge, tells DW.
Below are the most influential voices in the AI regulation debate and what they advocate.
Elon Musk, the prophet of doom
No businessman has been more vocal about the potential risks of artificial intelligence than Elon Musk, the billionaire who runs several corporations in the technology sector, including the new venture xAI.
For years, Musk has been warning about AI’s potentially catastrophic impacts on civilization. In 2018, he referred to the technology as “far more dangerous than nuclear weapons.” During a conversation with British Prime Minister Rishi Sunak this month, the businessman repeated warnings that AI could be “the most disruptive force in history” and urged regulators to act as “arbiters.”
But Musk also warned against excessive control, telling Sunak that governments should avoid “stepping in with regulations that inhibit the upside” of technology.
Daniel Leufer, senior analyst at Access Now, a digital rights group based in Brussels, argues that by highlighting these existential risks, Musk diverts attention from urgent concerns about today’s technology, such as protecting personal data and ensuring fair AI systems.
“It’s diverting attention from the technology we’re dealing with right now to things that are very speculative and often in the realm of science fiction,” Leufer tells DW.
Sam Altman, the regulator whisperer
In November 2022, San Francisco-based OpenAI launched ChatGPT, becoming the first company to make a large-scale generative AI system publicly available. Since then, the company’s CEO, Sam Altman, has embarked on a tour to discuss AI regulation, including with lawmakers around the world – from Washington to Brussels.
This propelled Altman to the forefront of the debate. At meetings, the entrepreneur said that high-risk AI applications could cause “significant harm to the world” and needed to be regulated, only to then offer OpenAI’s expertise to guide policymakers through the complexities of cutting-edge AI systems.
“Basically, he’s saying, ‘Don’t trust our competitors, don’t trust yourselves, trust us to do this work,’” explains Neff, the Cambridge researcher. “It’s brilliant corporate communication.”
Still, Neff warns that OpenAI’s approach, while effective at advancing the company’s interests, may not adequately reflect the diversity of voices in society. “We’re asking for more accountability and democratic participation in these decisions, and that’s not what we’re hearing from Altman.”
Mark Zuckerberg, the silent giant
The CEO of Meta, another leading AI development company, has been notably quiet in the debate. Addressing US lawmakers in September, Mark Zuckerberg advocated collaboration among policymakers, academics, civil society and industry to “minimize the potential risks of this new technology, but also to maximize the potential benefits.”
Other than that, Zuckerberg appears to have delegated much of the regulatory discussion to his subordinates, such as the company’s president of global affairs, former British politician Nick Clegg.
On the sidelines of the recent UK AI summit, Clegg played down fears about AI risks that could threaten human survival – emphasizing instead more immediate threats such as the risk of undue interference in UK and US elections, scheduled for next year – and advocated the search for short-term solutions to issues such as detecting AI-generated online content.
Dario Amodei, the new kid on the block
Founded in 2021 by former OpenAI employees, Anthropic, an AI safety company, quickly attracted substantial investment, including up to $4 billion from technology giant Amazon.
And despite Anthropic’s brief trajectory, its CEO, Dario Amodei, has already earned a place in the debate on AI regulation.
In a recent speech to lawmakers at the AI Safety Summit at Bletchley Park, Amodei said the dangers associated with current AI systems may be relatively limited, but “are likely to become very serious at some unknown time in the near future” – according to a statement released by the company itself.
To address these looming threats, Amodei presented lawmakers with a methodology developed by Anthropic: an approach that categorizes AI systems based on the potential security risks they pose to users. A similar methodology, he said, could serve as a “prototype” for drafting bills to regulate the technology.
Leufer, of the NGO Access Now, warns against excessive dependence on corporate actors like Anthropic to develop public policy: although their contributions to the debate are necessary and useful, legislators must maintain their independence. “They certainly shouldn’t be the ones making the rules,” he says. “We must be very careful about letting them set the agenda.”
Deutsche Welle is Germany’s international broadcaster and produces independent journalism in 30 languages.