“With these measures, we act decisively in the face of these challenges.” These are the words of Brando Benifei, the Italian MEP who serves as rapporteur for the Artificial Intelligence Act in the European Parliament. The European Parliament is moving the regulation forward even as its exact content is still being negotiated. The EU wants to keep tools such as ChatGPT in check, and regards AI as one of the great challenges of the present and the future.
The European Parliament's Internal Market and Civil Liberties committees agreed in Strasbourg this Thursday on their position on rules designed precisely to curb the negative effects that tools such as ChatGPT may have on citizens' daily lives, as well as their use, for example, in generating fabricated realities or spreading false news. The agreement passed with 84 votes in favor.
The aim is for the plenary session of the European Parliament to confirm its negotiating position on this artificial intelligence law next June; the chamber will then have to negotiate the final text of the regulation in a trilogue with the European Commission and the Council of the EU.
Javier Zarzalejos, MEP for the PP, tells 20minutos from Strasbourg that disinformation and AI “do not necessarily have to go hand in hand,” but that it is important to understand the risks. “AI is an element that will drive and amplify disinformation, but it is an issue with many nuances, and it is important not to mix concepts,” he says.
He also gives concrete examples. “You can recreate a surgical operation in a way that serves and helps doctors, or you can create an artificial reality that is indistinguishable from the real thing,” he adds. He therefore asks that the matter be examined case by case: “If we combine AI's capabilities in natural language, the famous ChatGPT, with AI's ability to create images, situations, or scenarios, we see that it is a truly formidable disinformation weapon, and there we could have a very serious problem.”
The European Parliament is working to make the legislation as well grounded as possible, and in this way to avoid the gray areas that can arise in a sector that will keep advancing. For example, it designates as high risk a whole series of artificial intelligence systems with very specific uses, which may only be placed on the market if they respect the fundamental rights and values of the EU. This is the case for tools that could influence an electoral process, mass surveillance systems, or systems that help spread illegal content.
Likewise, the idea is that any entity failing to comply with the future law, which is expected to be approved before the end of the year, would face a fine of up to 7% of its global annual turnover.