The EU AI Act could pose significant challenges for entrepreneurs
02.08.2024 / Articles / Intellectual Property
On July 12, 2024, the Official Journal of the European Union published the Regulation of June 13, 2024, laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139, and (EU) 2019/2144, as well as Directives 2014/90/EU, (EU) 2016/797, and (EU) 2020/1828 (Artificial Intelligence Act). The EU AI Act is the first comprehensive set of regulations in the world governing artificial intelligence. Its aim is to establish a uniform legal framework, particularly for the development, marketing, deployment, and use of AI systems.
What are AI systems?
An AI system is a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. The definition is deliberately broad: in practice, it covers any system that uses artificial intelligence solutions in any way.
Who is affected by the AI Act?
The AI Act introduces new obligations, particularly for the following entities:
- providers of AI systems,
- deployers of AI systems,
- importers and distributors of AI systems,
- manufacturers who place on the market or put into service an AI system under their trade name or trademark along with their product,
- authorized representatives of providers not established in the Union,
- individuals affected by AI.
Risk
The AI Act classifies AI systems according to the level of risk associated with their specific use. This classification is crucial because each risk category carries different regulatory requirements: the higher the risk level, the more obligations are imposed on the deployment and use of AI. The following types of risk are distinguished:
- Unacceptable risk: Includes the use of subliminal techniques, social scoring leading to detrimental or unfavorable treatment of certain individuals or groups, emotion recognition in the workplace and educational institutions (unless for medical or safety reasons), biometric categorization based on biometric data to determine race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation. The use of AI systems that pose an unacceptable risk is prohibited;
- High risk: Includes AI systems intended for use in the recruitment or selection of individuals, for making decisions affecting the terms of work-related relationships, the promotion or termination of work-related contractual relationships, task assignment based on individual behavior or personal traits, and for monitoring or evaluating the performance of individuals in such relationships. This may also apply to certain systems in areas such as biometrics, education and vocational training, access to essential private services and public services, and benefits. High-risk AI systems will be subject to strict regulations;
- Limited or minimal risk: Includes most AI systems in use today. The operation of these systems will typically not be restricted or burdened with additional obligations. For some AI systems, transparency requirements have been introduced, for example, where there is a clear risk of manipulation. Examples in this category include chatbots, spam filters, and AI-assisted applications.
What’s important?
- Competence – ensuring, to the greatest extent possible, a sufficient level of AI literacy among staff and other individuals involved in the operation and use of AI systems on their behalf, considering their technical knowledge, experience, education, and training, as well as the context in which AI systems are to be used;
- Informing – including:
- informing providers (as well as importers or distributors and relevant market surveillance authorities) in the event of serious incidents,
- informing individuals that a high-risk AI system is being used with regard to them,
- informing employee representatives and employees that they will be affected by a high-risk AI system in the workplace,
- informing that content (images, audio, or video content representing deepfakes) has been artificially generated or manipulated;
- Technical and organizational measures – including:
- taking appropriate technical and organizational measures by entities using high-risk AI systems to ensure that such systems are used in accordance with the instructions for use accompanying them,
- implementing adequate and effective cybersecurity measures to protect the security and confidentiality of obtained information and data, and removing accumulated data,
- implementing procedures for testing, examining, and validating,
- investigating serious incidents involving AI systems,
- providing appropriate conditions for processing special categories of personal data;
- Cooperation with relevant authorities.
Penalties
Failure by the relevant entities to comply with the obligations imposed by the AI Act will result in financial liability. These penalties can amount to:
- up to EUR 35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover in the case of non-compliance with the prohibition of unacceptable practices,
- up to EUR 15,000,000 or, if the offender is an undertaking, up to 3% of its total worldwide annual turnover in case of violations of obligations under the AI Act.
The penalties must be effective, proportionate, and dissuasive.
When?
The AI Act entered into force on August 1, 2024, and will be fully applicable two years from that date, with some provisions taking effect earlier. Prohibitions related to certain uses of AI will apply after six months, and provisions such as the obligations of general-purpose AI model providers and some penalty provisions will apply after one year. A longer period (36 months from the date of entry into force) applies to the rules for classifying high-risk AI systems and the corresponding obligations.
How to prepare?
The EU AI Act introduces new obligations particularly for entities developing AI systems or commissioning their development, as well as those using, importing, or distributing artificial intelligence. To adequately prepare for the obligations arising from the AI Act, we propose:
- Determining which AI systems are or will be in use and their classification;
- Verifying whether the AI systems in use meet the appropriate standards and are used for suitable purposes;
- Adapting the systems in use to the obligations associated with AI systems of the relevant risk level;
- Assessing the risks associated with the use of AI, especially in high-risk systems;
- Preparing new and evaluating existing procedures for compliance with the AI Act, including introducing monitoring and audit mechanisms for AI systems;
- Training employees to ensure an appropriate level of competence among staff;
- Preparing the necessary documentation;
- Developing a plan for monitoring systems and storing logs.