The European Parliament approved a draft regulation on artificial intelligence on 14 June. The text, which must now be negotiated with the member states, will not be adopted before 2026. Yet it is already provoking heated debate between those who fear that Europe is holding back innovation and those who warn of the risks that artificial intelligence poses. Here is an update from the experts at AURIS Finance, a consultancy specialising in mergers and acquisitions.
Sam Altman has become an insomniac. The CEO of OpenAI, the company behind ChatGPT, a conversational artificial intelligence chatbot, has expressed doubts about his own invention.
Super-powered digital brains
He fears that “artificial intelligence is threatening the world” and urges regulators to get to grips with the issue: “There will undoubtedly be people who will not apply the limits we set ourselves. As a society, we have very limited time to manage and regulate this phenomenon.” The two major risks he identifies are large-scale disinformation and the proliferation of cyber-attacks. Sam Altman is far from the only person concerned about the large-scale deployment of AI. In March 2023, Elon Musk and around a thousand other industry experts signed an open letter calling for a “pause” in the development of artificial intelligence, arguing that such software must be regulated because it is “dangerous to humanity”.
Europe at the forefront of legislation
Europe has been working on AI regulation since April 2021, long before the arrival of ChatGPT. The text adopted in mid-June now goes to the member states, with possible implementation in 2026. It establishes a four-level risk pyramid. At the bottom are uses that require no special monitoring. At the top, the fourth level, are so-called unacceptable risks, such as facial recognition databases. This point is hotly debated, as the Commission wants law enforcement agencies to be able to use automated facial recognition systems in the fight against crime and terrorism. Levels two and three cover limited-risk and high-risk technologies. “There will be mandatory declarations required from all the developers of these technologies so that anyone who has access to them knows that they are really artificial intelligence and that they will not be misused to falsify reality or deceive those who observe them,” explains Geoffroy Didier, vice-chairman of the European Parliament’s committee on artificial intelligence, in a France Info interview. For example, photos generated by an AI will have to be labelled as such.
What about European innovation?
This is the first regulation in the world to govern the use of generative artificial intelligence. But if Europe is the only one to tackle the issue, it risks stifling innovation at home. “We need to regulate, yes, but we also need to ensure that we have companies that master these technologies,” Cédric O, former Secretary of State for the Digital Economy, told France Info in mid-June. In France, many companies are currently working on generative AI software. In March 2023, LightOn launched Paradigm, a generative AI platform designed for large companies. With features very similar to ChatGPT’s, the platform can be deployed directly on corporate IT infrastructure, and users are promised full control over their data.
Get the support you need
Beyond the deployment of the technology, it is its use that now raises questions. As with the Internet, businesses in every sector may soon be using AI on a daily basis. In mergers and acquisitions, questions about the use of this technology and compliance with regulations are an integral part of the due diligence phase. AURIS Finance’s experts specialise in different sectors and will support you throughout your sale or acquisition transactions.