EU AI Act: Latest draft Code of Practice for AI model makers

Ahead of a May deadline to lock in guidance for providers of general-purpose AI (GPAI) models on complying with the provisions of the EU AI Act that apply to Big AI, a third draft of the Code of Practice was published on Tuesday. The Code has been in development since last year, and this draft is expected to be the final revision round before the guidelines are finalized in the coming months.

A website has also been launched to boost the Code's accessibility. Written feedback on the latest draft must be submitted by March 30, 2025.

The bloc's risk-based rulebook for AI includes a subset of obligations that apply only to makers of the most powerful AI models, covering areas such as transparency, copyright, and risk mitigation. The Code is intended to help makers of GPAI models understand how to meet their legal obligations and avoid the risk of sanctions for non-compliance. AI Act penalties for breaches of GPAI requirements, specifically, could reach up to 3% of global annual turnover.

Streamlined

The latest revision of the Code is billed as having "a more streamlined structure with refined commitments and measures" compared with earlier iterations, based on feedback on the second draft, which was published in December.

Further feedback, working-group discussions, and workshops will feed into the process of turning the third draft into final guidance. The experts say they hope to achieve greater "clarity and consistency" in the final version of the Code.

The draft is broken down into a handful of sections covering commitments for GPAIs, along with detailed guidance on transparency and copyright measures. There is also a section on safety and security obligations, which apply only to the most powerful models (those deemed to carry so-called systemic risk, or GPAISR).

On transparency, the guidance includes an example of a model documentation form that GPAIs could be expected to fill in to ensure that downstream deployers of their technology have access to key information to support their own compliance.

Elsewhere, the copyright section is likely the most immediately contentious area for Big AI.

The current draft is littered with terms like "best efforts," "reasonable measures," and "appropriate measures" when it comes to complying with commitments such as respecting rights reservations when crawling the web to acquire data for model training, or mitigating the risk of models producing copyright-infringing outputs.

The use of such hedged language suggests that data-mining AI giants may feel they have plenty of wiggle room to carry on grabbing protected information to train their models and ask for forgiveness later, but it remains to be seen whether the language is toughened up in the final draft of the Code.

Language used in an earlier iteration of the Code, stating that GPAIs should provide a single point of contact and complaint handling to make it easier for rightsholders to communicate grievances "directly and quickly," appears to have disappeared. Now there is simply a line stating: "Signatories will designate a point of contact for communication with affected rightsholders and will provide easily accessible information about it."

The current text also suggests that GPAIs may be able to refuse to act on copyright complaints from rightsholders if the complaints are "manifestly unfounded or excessive, in particular because of their repetitive nature." That implies creatives' attempts to tip the scales by using AI tools to detect copyright issues and automate filing complaints against Big AI could result in them … simply being ignored.

On safety and security, the EU AI Act's requirements to assess and mitigate systemic risks already apply to a subset of the most powerful models (those trained using a total computing power of more than 10^25 FLOPs), but this latest draft sees some previously recommended measures scaled back in response to feedback.
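For a rough sense of what that compute threshold means in practice, here is a minimal sketch in Python. It assumes the widely cited rule of thumb that training compute is approximately 6 × parameters × training tokens; the model figures used are hypothetical illustrations, not numbers from the Act or the Code.

```python
# Minimal sketch: estimate total training compute and compare it with the
# AI Act's 10^25 FLOPs systemic-risk threshold. Uses the common ~6 * N * D
# approximation (N = parameter count, D = training tokens); the model
# figures below are illustrative assumptions, not official numbers.

EU_SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the AI Act


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training FLOPs as 6 * N * D."""
    return 6 * parameters * training_tokens


# Hypothetical model: 70B parameters trained on 15T tokens.
flops = estimated_training_flops(parameters=70e9, training_tokens=15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~6.30e+24
print("Above systemic-risk threshold?", flops > EU_SYSTEMIC_RISK_THRESHOLD)
```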

American pressure

Unmentioned in the EU's press release about the latest draft are the blistering attacks on European lawmaking generally, and the bloc's rules for AI specifically, coming from the US administration led by President Donald Trump.

At the Paris AI summit last month, US Vice President JD Vance dismissed the need to regulate to ensure AI is applied safely; the Trump administration would instead be leaning into "AI opportunity." And he warned Europe that overregulation could kill the golden goose.

Since then, the bloc has moved to kill off one AI safety initiative, putting the AI Liability Directive on the chopping block. EU lawmakers have also trailed an incoming "omnibus" package of simplification reforms to existing rules, which they say aims to cut red tape and bureaucracy for business, with a focus on areas such as sustainability reporting. But with the AI Act still being implemented, there is clearly pressure being applied to dilute its requirements.

At the Mobile World Congress trade show in Barcelona earlier this month, French GPAI model maker Mistral, a particularly vocal opponent of the EU AI Act during negotiations to conclude the legislation in 2023, said it is having difficulty finding technological solutions to comply with some of the rules, with founder Arthur Mensch adding that the company is "working with regulators to make sure that this is resolved."

While this GPAI Code is being drawn up by independent experts, the European Commission, via the AI Office that oversees enforcement and other activity related to the law, is in parallel producing some "clarifying" guidance that will also shape how the law applies, including definitions for GPAIs and their responsibilities.

So look out for further guidance, "in due course," from the AI Office, which the Commission says will "clarify … the scope of the rules," as that could offer a pathway for lawmakers who are losing their nerve to respond to US lobbying to deregulate AI.
