More and more machines, plants and electrical equipment make use of self-learning algorithms to improve usability for customers and users. For product safety this is a novelty: products no longer necessarily have a fixed functional range when they are placed on the market. Self-learning software components can modify or extend this functional range over time.
To address these challenges, the EU Commission submitted a legislative proposal for a European approach to Artificial Intelligence (AI). The proposal promotes the use of AI while seeking to take account of the risks that AI systems may pose. The outcome of this legislative initiative is a regulation with harmonised rules governing the development, placing on the market and use of AI systems in the EU.
"On Artificial Intelligence, trust is a must, not a nice to have. With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted," states Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age.
AI systems have undergone rapid technical development in recent years. The EU Commission therefore called for a harmonised European regulation to address the challenges posed by AI systems: a common European framework that takes account of the benefits as well as the potential risks of such systems. The aim is to protect both fundamental rights and users, and to establish a legal basis for the rapidly developing field of AI.
With the new regulation, the legislator takes a risk-based approach: "the nature and content of such rules shall be tailored to the intensity and scale of the risks that AI systems may pose". This means that AI systems posing an unacceptable risk to human safety are strictly prohibited. This includes systems that use subliminal or intentionally manipulative techniques, exploit human vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socio-economic status or personal characteristics).
AI systems with an unacceptable risk are therefore banned, as they pose a safety threat to users. If a system poses a high risk, strict requirements must be met before it can be placed on the market. In the case of a low risk, the regulation merely points out possible dangers; in the case of a minimal risk, there should be as little interference as possible in the free use of such systems. In the case of a specific transparency risk, users must be informed if biometric categorisation or emotion recognition systems are used. A schematic summary of these tiers follows below.
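To make the tiered logic of the two preceding paragraphs easier to grasp, the following minimal Python sketch paraphrases the risk tiers and their regulatory consequences as described above. It is purely illustrative: all names are our own, and the mapping is not a legal classification tool.

```python
# Illustrative sketch only: the AI Act's risk tiers and their regulatory
# consequences as described in the article, paraphrased as a Python mapping.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"            # e.g. social scoring, manipulation
    HIGH = "high"                            # e.g. safety-relevant systems
    SPECIFIC_TRANSPARENCY = "transparency"   # e.g. emotion recognition
    LOW = "low"
    MINIMAL = "minimal"

REGULATORY_CONSEQUENCE = {
    RiskTier.UNACCEPTABLE: "prohibited",
    RiskTier.HIGH: "strict requirements before placing on the market",
    RiskTier.SPECIFIC_TRANSPARENCY: "users must be informed that AI is in use",
    RiskTier.LOW: "possible dangers are pointed out",
    RiskTier.MINIMAL: "as little interference as possible in free use",
}

print(REGULATORY_CONSEQUENCE[RiskTier.HIGH])
```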
In the compromise proposal of May 2023, MEPs expanded the classification of high-risk areas to include hazards to health, safety, fundamental rights or the environment. They also added AI systems used to influence voters in political campaigns, as well as recommender systems used by social media platforms (those with more than 45 million users under the Digital Services Act), to the list of high-risk areas.
Companies that do not comply with the regulation face fines. The fines amount to EUR 35 million or 7 % of annual global turnover (whichever is higher) for violations involving prohibited AI applications, EUR 15 million or 3 % for violations of other obligations, and EUR 7.5 million or 1 % for supplying incorrect information. For SMEs and start-ups, more proportionate upper limits on fines are provided for infringements of the AI Act.
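The "whichever is higher" rule means the applicable fine depends on a company's turnover. A minimal sketch of the arithmetic, using a hypothetical turnover figure (the caps and percentages are taken from the text above; the function name is our own):

```python
# Sketch of the "whichever is higher" fine rule described above.
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Return the higher of the fixed cap and the turnover-based amount."""
    return max(fixed_cap_eur, pct * turnover_eur)

turnover = 2_000_000_000  # hypothetical: EUR 2 billion annual global turnover

# Prohibited AI applications: EUR 35 million or 7 % of turnover
print(max_fine(turnover, 35_000_000, 0.07))  # -> 140000000.0 (7 % is higher)

# Other obligations: EUR 15 million or 3 %
print(max_fine(turnover, 15_000_000, 0.03))  # -> 60000000.0

# Supplying incorrect information: EUR 7.5 million or 1 %
print(max_fine(turnover, 7_500_000, 0.01))   # -> 20000000.0
```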
The AI Act introduces special rules for general purpose AI models to ensure transparency along the value chain. For very powerful models that could pose systemic risks, there will be additional binding obligations regarding risk management and the monitoring of serious incidents. These new obligations will be operationalised through codes of practice developed by industry, academia, civil society and other stakeholders together with the Commission.
The AI Regulation and the Machinery Regulation (EU) 2023/1230 are intended to complement each other. The AI Regulation primarily covers safety risks arising from AI systems that control the safety functions of a machine. The Machinery Regulation, in turn, is intended to ensure that the AI system is integrated into the overall machine in such a way that the safety of the machine as a whole is not jeopardised.
Furthermore, the European Commission underlines that manufacturers need to issue only one declaration of conformity covering both regulations.
The Regulation provides for harmonised standards to play a key role in complying with the provisions of the AI Regulation. According to Article 40(1), harmonised standards are to be developed specifically for high-risk AI systems and general purpose AI models, and their references will subsequently be published in the Official Journal of the EU. Further details of these future technical standards should become available once the EU Commission has issued a standardisation mandate to the standardisation organisations.
The following standardisation documents, which are expected to be used to meet the requirements of the AI Act, have been published to date:
A first attempt to address the implications of machine learning for machinery safety is the recently published ISO/TR 22100-5:2021-01 "Safety of machinery - Relationship with ISO 12100 - Part 5: Implications of artificial intelligence machine learning".
The regulation was published in the Official Journal of the EU on 12 July 2024 and entered into force 20 days later, on 1 August 2024. It becomes generally applicable 24 months after entry into force, on 2 August 2026, although individual provisions apply at different times: the prohibitions on unacceptable-risk AI practices apply from 2 February 2025, the obligations for general purpose AI models from 2 August 2025, and the rules for high-risk AI systems in products covered by Annex I from 2 August 2027.
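Because these milestone dates are fixed in the regulation itself rather than derived at runtime, a simple way to work with them is to store them and check which provisions already apply on a given date. A minimal sketch (the dates are taken from the regulation; the helper function is our own):

```python
# Sketch: which AI Act milestones have been reached on a given date?
from datetime import date

# Milestone dates as laid down in Regulation (EU) 2024/1689.
MILESTONES = [
    (date(2024, 8, 1), "entry into force"),
    (date(2025, 2, 2), "prohibitions on unacceptable-risk practices apply"),
    (date(2025, 8, 2), "obligations for general purpose AI models apply"),
    (date(2026, 8, 2), "the regulation becomes generally applicable"),
    (date(2027, 8, 2), "rules for high-risk AI in Annex I products apply"),
]

def applicable_on(day: date) -> list[str]:
    """Return all milestones already reached on the given date."""
    return [label for when, label in MILESTONES if when <= day]

print(applicable_on(date(2026, 1, 1)))
# -> ['entry into force',
#     'prohibitions on unacceptable-risk practices apply',
#     'obligations for general purpose AI models apply']
```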
For a detailed overview of the various implementation dates, please refer to the timeline of the Artificial Intelligence Act.
You can open and download the full text of the AI Regulation via the following link; it is also available as a bibliographic data set in the Safexpert StandardsManager:
Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence
Posted on: 2024-10-16 (last updated)
Daniel Zacek-Gebele, MSc, is Product Manager at IBF for additional products and data manager for updating standards data on the Safexpert Live Server. He studied economics in Passau (BSc) and Stuttgart (MSc), specialising in International Business and Economics. Email: daniel.zacek-gebele@ibf-solutions.com | www.ibf-solutions.com
Not yet registered? Register now for the free CE-InfoService and receive information by email whenever new technical articles, important standards publications or other news from the field of machinery and electrical equipment safety or product compliance become available.