Europe is falling behind in the AI race. Rather than delaying innovation, the EU should set up an ‘advanced AI governance institution’ to build on the AI Act and solidify Europe’s place at the table as the AI industry grows.
Last month, Claude-2 – a new language model built to challenge GPT-4 – was released in the US and UK, but not the EU. Unfortunately for Europeans, we’ll be late to the party. We’ll have to wait several months for technology that’s arguably cheaper than, as powerful as, and safer than ChatGPT.
It’s a familiar story. Europeans only got access to Bard, Google’s new state-of-the-art language model, in mid-July, after a 2-month delay due to EU data protection rules. These delays slow European productivity and innovation at a cost that concerns over data privacy don’t justify, and they expose an EU out of touch with far more critical AI issues. The EU should be more strategic about its AI regulation and not waste political capital on small-fry issues. Instead, it should focus on the real issues at hand and seize its opportunity to make the future of AI safer by founding an advanced AI governance organisation.
A few months might not sound like a long wait for new technology, but in the AI era, it’s a lifetime. Claude-2 has improved significantly since March. It can now match GPT-4 (the model behind ChatGPT) on several benchmarks, its coding ability has drastically improved, and it’s 4-5 times cheaper than GPT-4. Most importantly, it is now twice as good at giving harmless responses to potentially dangerous prompts.
Even a 2-month delay in getting technology into the hands of European workers sets the odds against them. Each day we wait, competitors in the US and UK have access to cheaper and safer technology. If Europeans are worried about their data, they can simply wait for a more secure model to be released – but this shouldn’t stop those who are less concerned with privacy from accessing the technology now. Europe is slowing itself down on AI and making the AI we use less safe in the process.
Europe is wasting political capital. Over the coming years, Europe will face pressing concerns with advanced AI. In the meantime, we are squabbling over comparatively insignificant issues.
Why the EU AI Act will fail to protect Europeans
A recent paper by Google DeepMind and international contributors showed that “some advanced AI capabilities like automated software development, chemistry and synthetic biology research, and text and video generation — may be misused by malicious actors around the world with transnational consequences.”
The paper points out that “a number of challenges have the potential to transcend national borders and impact societies and economies worldwide”. Even if the EU AI Act succeeds in ensuring safe AI development within the EU, that won’t protect Europeans. The EU has a near non-existent share of frontier AI labs and talent (in 2022, 90% of all authors of significant machine learning systems were from the UK, US and China). By accident or misuse, someone outside the EU could easily use advanced AI to hack into European assets or spread a novel pathogen into Europe. Europe won’t be able to protect itself with a purely insular policy agenda. We can’t expect every country to get AI regulation right on its own. If even one gets it wrong, there could be devastating consequences.
DeepMind’s paper lays the foundation for why international governance is necessary and what it could look like. In doing so, it lays out four distinct governance institutions that may need to form. Two of them – an ‘AI Safety Project’ akin to the Manhattan Project, and a ‘Commission on Frontier AI’ to study the risks and opportunities presented by advanced AI – will require close proximity to existing frontier AI talent and labs, which makes the US or UK governments the obvious choice to lead these initiatives. However, we don’t want to end up with a UK/US-dominated AI governance world. That’s where the EU can step in.
The EU is a strong candidate to lead an ‘Advanced AI Governance Organisation’
The EU is a strong contender to lead the third institution suggested: an intergovernmental or multi-stakeholder ‘Advanced AI Governance Organisation’ to help internationalise and address global risks from advanced AI systems. The EU’s lack of a domestic frontier AI industry is a strength here: it can act as an impartial body with the international community’s interests in mind.
Europe has a vested interest in global AI governance and can accomplish this by channelling what it does best: making rules. Europe can draw on its wealth of experience governing and regulating one of the world’s largest markets to help set global standards. Improving and harmonising global AI regulation is a task the EU is well suited to, and one that would drastically increase European safety and security.
This won’t be an easy path for the EU. Building a global governance organisation will require political savvy and compromise. The challenge demands thinking beyond just Europe, something the EU has failed to do in the past. The EU will almost certainly have to soften its stance on AI regulation to get the world on board. Most importantly, it will have to focus on the advanced AI systems that pose significant risks, which means setting aside systems whose harms are more benign.
If the EU manages to do all that, the upsides are tremendous. It could gain a lever over a technology that will shape the next generation – a lever it currently lacks. By shifting its focus from the non-issues of today to the pressing issues of tomorrow, it could secure a future where Europe’s interests aren’t at the mercy of American decisions. That all starts with setting aside minor concerns over current models like Claude-2, which are already cheaper than, as powerful as, and probably safer than existing competitors.