Written by Maria Moloney
I was in Cluj-Napoca, Romania last week, where I gave a keynote speech on data protection across various industries. I attended a conference called Techeconomics, which launched a platform for start-ups. Cluj-Napoca is at the centre of Romania’s tech industry, and it plays an important role in the development of the technology industry in Eastern Europe.
During my time in this beautiful city, I chaired a session at the conference, and I was delighted to have Kai Zenner give us an update on the AI Act.
Kai informed us that discussions are still very challenging. There are two groups that disagree on what should be included in the Act and what should be left out. This is worrying because if there is no agreement by the end of 2023, it will be much more challenging to pass the Act in the New Year when the EU elections take centre stage. Naturally, the focus will shift at that time from the discussion on the AI Act to the elections. Then, by the time the elections are over, the draft Act may be outdated and may never come to pass.
Kai outlined that the European Parliament’s draft introduced a set of principles into the Act, similar to those in the General Data Protection Regulation (GDPR), in an attempt to future-proof the Act. This is because the entire AI industry is changing so quickly that it is difficult to predict all possible scenarios where AI will be used. Providing a set of general AI principles that are broad enough to encompass as many scenarios as possible will increase the robustness of the Act.
To further future-proof the Act, Kai also mentioned the possibility of allowing the EU Commission to review new scenarios even after the Act has been agreed upon, so that any situation not yet considered can be incorporated into the legislation on an ongoing basis. In fact, Kai argued that the world has already seen three generations of AI since the AI Act was first considered.
Apparently, there are still four unresolved issues that need to be agreed upon:
The first is Article 5 of the AI Act, which covers AI systems that should be fully banned in the EU. This is currently the most contentious issue, and the list of banned systems remains undecided.
The second is Article 6 of the AI Act, which deals with high-risk AI systems. These are systems that are allowed in the EU but are classed as high-risk. The contention arises around the definition of what constitutes a high-risk system.
The third issue causing problems in the negotiations is the notion of central governance and enforcement. It is still not clear how governance of AI systems will be suitably centralised and enforced within the EU institutions.
Finally, the fourth issue that could cause the talks to break down concerns foundation models, or large language models. These models became popular after the European Commission’s draft was released and, as a result, the European Parliament’s draft tackled the issue. These systems, however, are changing so quickly that it is difficult to know how to regulate them. There is also a debate over whether these models are high-risk systems or not. Again, the discussions are currently split.
Time is ticking for the much-awaited AI Act, and here’s hoping that Kai and everyone involved will get the best deal possible by Christmas.
If you have any questions on the above article or wish to share an opinion piece, please get in touch via the following Contact Form.