Why did the EU block the launch of Bard, Google's AI?

Google had to postpone the launch of Bard, its conversational artificial intelligence, in the European Union. The Irish Data Protection Commission (DPC), Google's lead data protection regulator in the EU, blocked the launch over privacy concerns. It explains in a statement that the company has not shown how it plans to comply with the current data protection rules. This decision delays Bard's adoption in Europe, but it also illustrates how strict AI regulation is in the Union.

Why Bard's launch was suspended

Since launching Bard in March, Google has successfully rolled out the tool in the US, the UK, and 178 other countries and territories. However, the Californian company has run into difficulties in the European Union.

DPC Deputy Commissioner Graham Doyle said that Google only recently notified the regulator of its intention to launch Bard in the EU this week. He added that the American company provided neither "a detailed briefing nor sight of a data protection impact assessment or any supporting documentation".

Concretely, Google has so far not sufficiently explained how its generative AI tool protects the privacy of Europeans. The regulator therefore had no choice but to delay the launch. The DPC is also asking the company to promptly provide a detailed data protection impact assessment and to answer additional questions.

The EU's strict approach to AI regulation

This is not the first time the launch of an AI tool has been halted in the EU over privacy concerns. ChatGPT, OpenAI's tool, ran into the same setbacks with European regulators. Its use had even been banned in Italy!

Eventually, that ban was lifted after discussions with the regulator about data privacy. Google now finds itself in a similar situation with the DPC over the suspension of Bard's launch. This case clearly shows that the European Union's approach to AI regulation is much stricter than that of other countries.

The pressure on AI technologies in the Union stems from the AI Act, a draft of which was advanced by the European Parliament in May 2023. This law provides a regulatory framework for artificial intelligence and seeks to align the governance of AI technologies with the General Data Protection Regulation (GDPR).

The EU AI Act emphasizes the security and privacy of European users, as well as the accountability of companies operating in this space. Its requirements differ from, and in some respects go beyond, the provisions in force in the United Kingdom or the United States.

The impact on generative AI companies

While it is important to ensure that AI technologies deployed in the EU comply with the regulations, there are nevertheless questions to be asked about the impact of these provisions on generative AI companies and, by extension, on the adoption of AI.

OpenAI CEO Sam Altman, for example, said in May that his company could leave the European bloc if it became too difficult to comply with the proposed artificial intelligence rules. In particular, he fears that they will force generative AI companies to disclose the materials used to train their large language models.
