Ibán García del Blanco, MEP: "If we win or lose the Artificial Intelligence race it will not be because of regulation"

INTERNATIONAL / By Carmen Gomaro

Ibán García del Blanco (León, 1977) was the only Spanish MEP in the 40-hour lockdown that led, two weeks ago, to the agreement on the regulation of Artificial Intelligence. The specific details of the final text will be polished in the coming weeks, but the socialist politician maintains that a more than satisfactory balance was struck between maximalist positions: some said the law opens the door to mass technological racism, while others claimed that any regulation would leave Europe trailing behind China or the US.

You arrived at the room, at the final trilogues, with apparently irreconcilable positions.

There was a shared feeling among the MEPs, and also in the Council, that we had to reach an agreement, but we did not quite know how, because even in our own preliminary meetings there were differences on fundamental points, and the agenda contained a good number of serious controversies on which no progress had been made. We did not know how, but we knew we had to do whatever was necessary not to leave it hanging. And it turned out well. The legislative proposal came out years ago and was languishing, but ChatGPT revolutionized everything. Parliament has been working on this rule, in one way or another, for five years: the first proposal on ethical principles applied to AI, rapporteurs' reports, the Special Committee on AI, our own work on the Commission's proposal. The text was very mature, we felt there was little political ground left to scratch, and it was time to make a move. The Council knew it had to be done under the Spanish Presidency, which had done the technical work, had the experts, and had put a great deal of effort into the capitals. With the whole negotiation process at that stage, we all knew it would be very difficult under another presidency, especially the informal part, which cannot be handed over from one to the next. We shared the feeling that it was now or never. And it is true that it was very difficult, not only because of external pressure, from China or the US, but because the sector needs regulation as soon as possible. The vacatio legis will be long, implementation will take a few years, and we could wait no longer. Everything about this regulation has been atypical. That is inherent to the subject: the regulatory challenge of AI is that the technology is in full evolution and growth, there is a dimension that cannot yet be pinned down, and the epitome of that is the foundational models.
When we spoke to experts last year and before, genuine specialists, they told us this was a development we would see in four or five years. And suddenly it exploded and surprised us all. So we had to adapt. For the foundational models a tailor-made suit has been cut: the general regulation is built on use and the anticipation of risk, but for foundational models it is more a structure based on the power of the technology, which is unlike anything else. It has been complicated. It was the fifth trilogue and we broke the EU record for duration.

And in the final phase, the largest governments in the Union changed their minds and emerged as defenders of self-regulation.

The paradox is that those who reacted first, and most forcefully, to the explosion of ChatGPT are also the ones who shifted position. There is a fallacious dichotomy between regulation and development, as if they were contradictory. Both China and the US are at the forefront of this technology without Europe having a single standard in force. Leadership has to do with investment and a medium- and long-term strategic approach. In Europe, cooperation between all Member States is essential. We have taken many precautions so that the regulation does not create any kind of barrier to the development of the models. And we have studied it case by case, differentiating between systemic risks, where regulation must be very strict, and models that are in full growth and do not pose or generate the same risks. But I would say more. If you look at the Chinese regulations, they are very similar, in what concerns foundational models, to what Parliament proposed, which has now, in general terms, been approved. Biden's executive order is even more ambitious in some respects. Whether we win or lose the AI race will have nothing to do with regulation, which is a sine qua non of respect for the rule of law.

What is the result of the agreement?
Both sides were trying to safeguard public interests from different angles. Behind the push to relax prohibitions or requirements on the use of the technology lies the need for states and governments to anticipate crimes, identify victims, and address national security issues. That is understandable, but I think Parliament was quite right to take fundamental and individual rights as the ultimate touchstone. And we have reached a balance. The red lines on what was intolerable, the use of these technologies, above all by public authorities, have not been crossed. The final rule is much closer to what Parliament proposed than to the Council's position. National security is a matter for the States, but it could not have the inflationary tone the Council sometimes gave it. Parliament wanted to delimit what belongs to and fits within national security; it cannot be a catch-all. And where we have accepted exceptions, it is always with a prior fundamental-rights analysis, judicial authorization, requirements, and guarantees. Citizens can rest assured.
What is completely prohibited?

Subliminal manipulation. For example, classifying an entire population by a 'weakness characteristic' and targeting advertising that exploits it, especially at minors or people with disabilities. The use of emotion recognition at work or in education is prohibited, because it was intended, for example, to monitor exams. Anticipating the commission of crimes, as in a dystopia like Minority Report, is prohibited. And remote biometric identification by public authorities is generally prohibited, with some exceptions. Because of its power, its intrusive capacity, its use will be forbidden. We have generated certainty and security so that an operator can invest in Europe in good conditions and citizens can rest assured that their rights remain safe.

How do you know that what was approved today will not be obsolete in six months, given the explosion that ChatGPT was?

We cannot know, no one can, but we try to anticipate. We are fairly confident that it is unlikely in the near future. Moreover, the regulation is flexible and can work in any environment, because it is based on risk and on the technology. Either something emerges with the brutal, differential power that the foundational models have had, or it will fall within the existing parameters. If that happens, we would have to meet again; we are aware that there will be trial and error. A permanent council is planned to maintain a dialogue between the administration and the sector, in defense of consumers. And a European AI Office that should provide know-how on how the technology evolves.

Even as the agreement was being finalized, activists denounced it as a capitulation, the approval of technology that entrenches racism, an official Big Brother.

It's understandable. Faced with a technology that almost amounts to a potential civilizational change in itself, I understand that alarms go off and positions become inflated, maximalist.
Everyone plays a role: the technology sector practically said that if even half a standard were applied, Europe would be left fighting computers with sticks and stones. Both positions are absolutely radical, but necessary to create a balancing game that helps us politically to find a virtuous middle point. What must be clear is that we have not placed shackles that will prevent adequate technological development. There is a balance, so that in a case of real urgency, of compelling need, we do not have our hands tied behind our backs and can react, for example, to a terrorist threat. With all the safeguards, but it is possible.