As we prepare for the AI Act to enter into force on August 1, 2024, twenty days after its publication in the Official Journal on July 12, 2024, this blog explores how the AI Act interacts with the MDR 2017/745 and the IVDR 2017/746. We will examine whether the concerns about overlap and redundancy have been addressed and evaluate how effectively these measures reduce unnecessary burdens and duplication.
In April 2021, the first draft text of the AI Act was published. Now, over three years later, the regulation is set to govern AI systems on the European Union market. Back when the draft was released, we in the ‘AI & Medical Devices Working Group’ at NEN evaluated its potential impact and submitted feedback and recommendations to the European Commission.
Our primary concerns for the medical device sector included:
Horizontal Nature: The broad scope of the regulation could lead to increased regulatory burdens due to overlapping requirements and varying expectations for implementation, such as through harmonized standards.
Definition of AI: The initial definition of an AI system was too broad and needed clarification.
High-Risk Definition: There was no clear definition of what qualifies as a high-risk AI system.
Conformity Assessment: Uncertainty about how conformity assessment processes would align with existing regulations like MDR 2017/745 and IVDR 2017/746, and how to minimize additional burdens on Notified Bodies and the healthcare industry.
Risk Management: Concerns about how differences in risk management approaches would be addressed.
Quality Management & Technical Documentation: Questions about how these requirements would be integrated.
Regulatory Oversight: Uncertainty regarding the management of regulatory oversight.
General-Purpose AI: This issue was not addressed in the initial draft, as it was not yet a relevant concern.
The AI Act covers a vast amount of ground, but let’s return to the original intent of this blog: the interaction between MDR 2017/745, IVDR 2017/746, and the AI Act. Let’s examine whether our concerns were addressed and how effectively the final regulation has reduced the burdens and duplication implied by the initial draft.
Horizontal Nature
European institutions have made significant efforts to align the regulations, leading to some positive outcomes. Still, integrating the AI requirements directly into the existing vertical legislation for medical devices would have been more beneficial: it would have reduced confusion about harmonized standards, although amending multiple laws would have been challenging for the legislators. An initiative from the European Parliament (Article 8, paragraph 2a) proposed incorporating AI Act requirements into the MDR and IVDR, but this proposal did not survive the trilogue discussions.
Why Does This Issue Matter?
Consider this example: the ISO/IEC 42001 management system standard, developed mainly by non-regulated industries such as big tech, does not address the AI Act’s requirements on quality, safety, and fundamental rights well enough to be harmonized under the AI Act. ISO 13485, on the other hand, is designed for medical devices: its process controls can manage risks related to data governance in much the same way they handle sterile devices or calibration equipment, but it contains nothing AI-specific. As a result, neither ISO 42001 nor ISO 13485 can demonstrate full compliance with the AI Act on its own, and discussions on how to close this gap are still ongoing within JTC 21, the European standardisation committee tasked with responding to the European Commission’s standardisation request from 2022.
With the European Commission’s deadline set for April 2025, time is running short, and no harmonized standard is currently available. In the absence of such standards, the European Commission can adopt common specifications. The differing interests of stakeholders, such as regulated industries versus non-regulated sectors and consumer groups versus big tech, complicate the process of reaching consensus on what the AI Act requires in terms of standardisation. It remains uncertain how standards for quality management, cybersecurity, and risk management will align with existing medical device standards.
Definition of Artificial Intelligence
The definition of Artificial Intelligence has been a topic of significant debate since the initial version of the AI Act. It evolved from specifying particular AI techniques to adopting a broader, more generalized definition.
Proposal April 2021
‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
Final Text May 2024
‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;
The inclusion of ‘inference’ in the definition clarifies the distinction between traditional rule-based software and modern AI techniques, which use inference to generate outputs from inputs.
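To make the role of ‘inference’ concrete, here is a minimal Python sketch contrasting a fixed, rule-based check with a model whose decision boundary is learned from data. The vital signs, thresholds, data, and model choice are purely illustrative and carry no regulatory meaning.

```python
# Minimal sketch: two ways to flag a patient as "at risk".
# All names, thresholds, and data below are hypothetical examples.

import numpy as np
from sklearn.linear_model import LogisticRegression

def rule_based_flag(heart_rate: float, spo2: float) -> bool:
    """Classic rule-based software: the output follows fixed, human-defined rules."""
    return heart_rate > 120 or spo2 < 90

# An AI system in the sense of the definition "infers" how to map inputs to outputs
# from data, rather than executing only explicitly programmed rules.
training_inputs = np.array([[80, 98], [125, 92], [110, 85], [70, 99]])
training_labels = np.array([0, 1, 1, 0])  # 1 = at risk (illustrative labels only)
model = LogisticRegression().fit(training_inputs, training_labels)

def learned_flag(heart_rate: float, spo2: float) -> bool:
    """Learned behaviour: the decision boundary was inferred from the training data."""
    return bool(model.predict([[heart_rate, spo2]])[0])

print(rule_based_flag(130, 95), learned_flag(130, 95))
```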
However, some gray areas remain, and defining AI in a way that accommodates future advancements is challenging. For more insights on this complex definition, check out this reference article. The European Commission plans to release guidance on interpreting this definition later in 2024.
High-Risk Artificial Intelligence Systems
Regarding medical devices regulated under the AI Act, little has changed in how their high-risk status is determined. However, the European Commission has indicated that interpreting Article 6(1) of the AI Act may be more complex than initially anticipated. Article 6(1) reads:
Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled:
(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;
(b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.
Safety Components
The MDR and IVDR do not cover the concept of Safety Components as described in Article 6(1) of the AI Act. However, the AI Act’s definition of a safety component (Article 3(14)) could apply to components of medical devices. This includes AI algorithms designed to monitor device behavior (e.g., a system that oversees a surgical robot), detect potential security breaches (e.g., identifying hacking attempts in software-based medical devices), or prevent radiation overdoses (e.g., controlling radiation levels in CT scans).
‘safety component’ means a component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property;
While these AI systems might not directly fulfill a medical device’s intended purpose or benefit, they could still be classified as ‘safety components’.
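As a thought experiment, the sketch below shows what such a monitoring component might look like for a surgical robot: it plays no part in the clinical intended purpose, yet its malfunction could endanger the patient. The device, signal, and anomaly logic are hypothetical, and whether such a monitor itself meets the AI-system definition discussed earlier would have to be assessed separately.

```python
# Hypothetical illustration of a monitoring 'safety component' in the sense of Article 3(14):
# it does not deliver the robot's clinical benefit, but its failure could endanger safety.

import statistics

class TorqueAnomalyMonitor:
    """Flags joint torques that deviate strongly from a baseline observed during normal operation."""

    def __init__(self, baseline_torques_nm: list[float], z_limit: float = 3.0):
        self.mean = statistics.fmean(baseline_torques_nm)
        self.stdev = statistics.stdev(baseline_torques_nm)
        self.z_limit = z_limit

    def allows_motion(self, torque_nm: float) -> bool:
        """Return False to request a safe stop when the reading looks anomalous."""
        z_score = abs(torque_nm - self.mean) / self.stdev
        return z_score <= self.z_limit

monitor = TorqueAnomalyMonitor(baseline_torques_nm=[2.0, 2.2, 1.9, 2.1, 2.0])
print(monitor.allows_motion(2.1))  # within the normal range
print(monitor.allows_motion(6.5))  # anomalous: trigger a safe stop
```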
AI Systems as Products
This leads us to the second aspect of the High-Risk classification. Since not every medical device that incorporates AI relies on that AI as a safety component, the ‘safety component’ concept will not apply universally, which raises the question of how to interpret the notion of an ‘AI system itself being a product.’ Without specific guidance, the interpretation remains unclear. Manufacturers should consider the following two questions:
Does the AI in my medical device contribute to the device meeting its intended purpose? (Would the device still perform its medical diagnostic or therapeutic intended purpose without the AI?); and
If the answer to question 1 is ‘no’, does the AI system qualify as a component that is relevant to the safe and secure operation of the medical device (i.e. a safety component)?
If both questions are answered with ‘no’, the medical device is, in the author’s view, unlikely to be classified as High-Risk under the AI Act. The European Commission plans to provide detailed guidance on this topic at the start of 2025.
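To make the screening logic explicit, here is a minimal helper that mirrors the two questions together with the third-party assessment condition from Article 6(1)(b). It reflects only the author’s reading; the function, its inputs, and the examples are hypothetical and are no substitute for the forthcoming guidance.

```python
# Rough screening sketch for an AI-enabled medical device, based on the author's reading
# of Article 6(1). Not a legal test; names and examples are illustrative.

def likely_high_risk(
    ai_contributes_to_intended_purpose: bool,  # question 1
    ai_is_safety_component: bool,              # question 2 (Article 3(14))
    third_party_assessment_required: bool,     # Article 6(1)(b): Notified Body involved under MDR/IVDR
) -> bool:
    relevant_ai_role = ai_contributes_to_intended_purpose or ai_is_safety_component
    return relevant_ai_role and third_party_assessment_required

# AI performs the diagnostic task in a class IIa device (Notified Body involved): likely High-Risk.
print(likely_high_risk(True, False, True))
# AI only optimises battery usage in a self-certified class I device: likely not High-Risk.
print(likely_high_risk(False, False, False))
```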
Conformity Assessment
The AI Act has seen significant revisions, especially in clarifying its relationship with the Medical Device and In-Vitro Diagnostic Regulations. Initially, there was uncertainty about whether manufacturers would need duplicate conformity assessments. Many of these concerns have since been addressed, leading to a more positive outlook.
Medical devices and in-vitro diagnostic devices will continue to undergo conformity assessment according to their existing procedures. Notified Bodies will be evaluated by Competent Authorities for their expertise in Artificial Intelligence before they can issue CE certificates. In addition to the standard requirements of the Medical Device and In-Vitro Diagnostic Regulations, these bodies will need to audit the additional aspects introduced by the AI Act, such as data governance, human oversight, logging, and fundamental rights risks.
Manufacturers should be aware that not all Notified Bodies may develop the necessary expertise to audit against the AI Act, potentially requiring manufacturers to switch to a different Notified Body. Additionally, Notified Bodies will need to address specific points from Annex VII, including points 4.3, 4.4, 4.5 and the fifth paragraph of point 4.6. This means manufacturers might have to provide training, validation, and testing datasets, as well as their AI models, for evaluation. Notified Bodies are working on enhancing their capabilities to test AI systems in-house, as discussed at the recent Medtech Summit in Brussels. Manufacturers should consider this when drafting contracts with data providers, as datasets may need to be shared with authorities.
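As an operational sketch, the snippet below shows one hypothetical way a manufacturer could inventory the datasets and model versions a Notified Body might ask to see, including whether the underlying data contracts permit disclosure. The structure and field names are the author’s illustration, not a prescribed format.

```python
# Hypothetical inventory of artefacts a Notified Body could request under Annex VII, point 4.
# Field names and structure are illustrative only.

from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    name: str
    split: str               # "training", "validation" or "testing"
    source: str              # data provider or registry
    sharing_permitted: bool  # does the data contract allow disclosure to authorities?

@dataclass
class SubmissionPackage:
    model_version: str
    datasets: list[DatasetRecord] = field(default_factory=list)

    def blocking_contracts(self) -> list[str]:
        """Datasets whose contracts would prevent sharing with a Notified Body."""
        return [d.name for d in self.datasets if not d.sharing_permitted]

package = SubmissionPackage(
    model_version="segmentation-net 2.4.1",
    datasets=[
        DatasetRecord("CT-train-2023", "training", "hospital consortium A", True),
        DatasetRecord("CT-holdout-2024", "testing", "commercial provider B", False),
    ],
)
print(package.blocking_contracts())  # -> ['CT-holdout-2024']
```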
Risk Management
The AI Act has significantly clarified its approach to risk management since its initial draft. Unlike the international standards community (SC 42), which focuses on organizational risks (as seen in ISO 31000), the AI Act emphasizes addressing safety concerns. It draws on definitions from ISO Guide 51, the basis for ISO 14971. The AI Act allows manufacturers to integrate risk management activities into their existing frameworks, but compliance with ISO 14971 alone is not enough. Manufacturers must also consider fundamental rights, such as equal treatment and data protection. This is distinct from the Fundamental Rights Impact Assessments (FRIAs) required for High-Risk AI Systems listed in Annex III.
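For illustration, here is one possible shape of a risk-register entry that keeps familiar ISO 14971 fields and adds the fundamental-rights dimensions the AI Act brings in. The fields, scales, and the example itself are hypothetical.

```python
# Hypothetical risk-register entry: ISO 14971-style fields extended with
# fundamental-rights considerations prompted by the AI Act. Illustrative only.

from dataclasses import dataclass

@dataclass
class RiskEntry:
    hazard: str
    harm: str
    severity: int                     # e.g. 1-5, per the manufacturer's own ISO 14971 scheme
    probability: int                  # e.g. 1-5
    affects_fundamental_rights: bool  # e.g. equal treatment, data protection
    affected_groups: list[str]        # sub-populations at risk of degraded performance
    mitigation: str

entry = RiskEntry(
    hazard="Lower sensitivity on under-represented skin tones",
    harm="Missed melanoma diagnosis",
    severity=5,
    probability=2,
    affects_fundamental_rights=True,
    affected_groups=["Fitzpatrick skin types V-VI"],
    mitigation="Re-balance training data; stratified performance acceptance criteria",
)
print(entry.severity * entry.probability)  # simple risk score under the assumed scheme
```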
Quality Management & Technical Documentation
Much like Risk Management, Quality Management System and Technical Documentation requirements can be integrated into existing systems and documentation. This integration helps reduce the duplication of efforts needed to comply with the AI Act. However, effectively implementing Quality Management requirements for AI Act compliance will necessitate the harmonization of a Quality Management System standard.
Manufacturers should closely monitor developments in standardization and the requirements set by Notified Bodies. The MDCG is reportedly working on a guidance document to clarify how the AI Act interacts with the MDR and IVDR. This document is anticipated to provide practical guidance on Quality Management and Technical Documentation in the absence of additional standards.
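In the meantime, a rough sketch of how the AI Act’s technical documentation items (Annex IV) might be folded into an existing MDR technical file could look as follows. The mapping is the author’s assumption and will need to be checked against the MDCG document and harmonized standards once available.

```python
# Assumed, illustrative mapping of AI Act Annex IV documentation items onto an existing
# MDR Annex II/III technical file structure; not an official cross-reference.

annex_iv_to_mdr_file = {
    "General description of the AI system": "Device description and specification",
    "Data and data governance (training/validation/testing)": "Design and manufacturing information",
    "Human oversight measures": "Information supplied with the device (IFU)",
    "Accuracy, robustness and cybersecurity": "Verification and validation documentation",
    "Automatic logging capabilities": "Design and manufacturing information",
    "Post-market monitoring plan for the AI system": "Post-market surveillance plan (Annex III)",
}

for ai_act_item, mdr_section in annex_iv_to_mdr_file.items():
    print(f"{ai_act_item} -> {mdr_section}")
```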
Regulatory Oversight
After a medical device enters the European market, it remains under the oversight of Competent Authorities. Member states may choose to establish additional governance bodies to oversee the use of Artificial Intelligence within these devices.
General-Purpose AI
It’s remarkable to think that when the first AI Act proposal was published, ChatGPT hadn’t yet launched—ChatGPT only became available in November 2022. The rise of such technologies has significantly influenced discussions about the AI Act, leading to the inclusion of additional chapters specifically addressing general-purpose AI. The impact of these tools on our society is profound, and their rapid evolution has driven important updates in regulatory frameworks.
An interesting aspect of the AI Act is its intention to regulate General-Purpose AI (GPAI) models that exhibit significant versatility and can be applied to a wide range of purposes. However, if a GPAI system is used for one specific, narrow purpose (for example, a healthcare application without a medical intended purpose), it might not qualify as a GPAI model or system and could fall outside the scope of the definition. Additionally, aside from transparency requirements, the obligations placed on General-Purpose AI Systems are quite limited.
‘general-purpose AI model’ means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market;
‘general-purpose AI system’ means an AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems;
Wrapping Up
Reflecting on the consultation document and the concerns raised about the initial AI Act proposal, many improvements have been made to align the AI Act with the Medical Device Regulation and the In-Vitro Diagnostic Regulation. Duplication of requirements has been minimized, allowing risk management, quality management, technical documentation, and labeling to be consolidated into a single file that addresses only the specific AI-related differences. Additionally, reusing the existing conformity assessment procedures of the Medical Device and In-Vitro Diagnostic Regulations to demonstrate AI Act compliance has significantly reduced the overall burden.
Does this mean the effort is minimal and all concerns are resolved? Absolutely not. Manufacturers must continue to closely monitor standardization activities and guidance, as many implementation questions remain unanswered and will need to be addressed through standards and guidance. Additionally, the transition period for High-Risk AI-enabled medical devices is only three years—a relatively short timeframe. Notified Bodies will need to enhance their expertise, while manufacturers must update their technical documentation, quality systems, and contractual agreements to meet the AI Act’s requirements.
Notified Bodies will need to evaluate manufacturers’ compliance once their Declarations of Conformity are updated to reflect adherence to the AI Act. We also anticipate that the time, resources, and costs associated with complying with the AI Act will increase. Some Notified Bodies have already begun charging nearly €10,000 per day for an expert to review AI-related aspects of medical devices.