EU Commission Final Report on AI in Healthcare: Deployment, Barriers, and Regulatory Priorities

Overview of the EU AI in Healthcare Study

In August 2025, the European Commission released a final report on the deployment of AI in healthcare. The study highlights AI’s potential to address Europe’s healthcare challenges, including aging populations, chronic diseases, and staff shortages, by enhancing efficiency and improving diagnosis and treatment pathways. (Read the report here)

However, current AI adoption in clinical practice remains slow, despite many AI tools being available. The report identifies four broad categories of barriers hindering AI integration: technological and data-related issues, legal and regulatory complexities, organizational and business challenges, and social and cultural barriers. Importantly, it also showcases “successful strategies (accelerators) employed by hospitals globally” to overcome these obstacles, offering inspiration for EU healthcare systems. The EU is described as being in a unique position to scale up AI safely, ethically, and equitably, balancing innovation with patient rights.

These findings also reinforce the need for robust data governance under the upcoming EU rules. For a detailed look at how data quality, documentation, and traceability requirements will apply to medical AI systems, read our blog on Data Requirements Under the EU AI Act and visit our home page to download our FREE AI eBook to ensure AI Act Compliance!

The report presents a mixed picture of AI adoption across the EU. While some areas, such as medical imaging, radiology triage, and clinical decision support, have integrated AI tools into practice, many others remain at the pilot stage due to cost, interoperability, and regulatory barriers. Radiology is the most advanced: nearly half of European radiologists now use AI, though most tools are moderate-risk (Class IIa) devices rather than complex, adaptive systems.

Adoption remains concentrated in larger academic hospitals with stronger infrastructure, leaving smaller or rural institutions behind. Emerging applications such as AI-driven genomics, digital twins, and outbreak prediction are still experimental.

Overall, the research and development pipeline for AI in health is robust, with patents and new algorithms expanding rapidly, yet real-world clinical deployment continues to lag. The report urges stakeholders to close this gap through coordinated policy and investment.

Key Barriers to AI Deployment

The study identifies four main categories of obstacles to AI integration in healthcare:

  • Technological & Data Challenges: Fragmented, non-standardised data and outdated hospital IT systems limit interoperability and slow AI implementation. “Black box” algorithms also undermine trust and validation. The report stresses the need for data standards such as HL7/FHIR and modernised digital infrastructure as a baseline for deployment.
  • Regulatory Complexity: AI in healthcare faces overlapping frameworks: MDR/IVDR, the forthcoming AI Act, and GDPR, along with differing national rules on consent and liability. This regulatory fragmentation creates uncertainty for both developers and healthcare providers. Clearer EU-level guidance and practical compliance support are urgently needed.
  • Organisational & Financial Barriers: Most Member States lack reimbursement pathways for AI tools. Without dedicated budgets, hospitals struggle to justify investment, especially smaller facilities. Limited real-world evidence compounds the funding challenge, leaving large academic centres as the primary adopters.
  • Social & Cultural Factors: The human element presents its own challenges. Healthcare professionals may be wary of AI systems – some clinicians fear being “second-guessed” or even replaced by algorithms. There are also ethical and trust concerns: both providers and patients worry about the reliability of AI’s decisions and the protection of sensitive medical data. Over half of patient organisations surveyed expressed doubts about AI reliability. Building confidence through transparency, explainability, and user training is essential to adoption.

Examples of Progress and Good Practices

Despite the barriers, the report highlights several encouraging developments across Europe. Some hospitals are overcoming data and interoperability challenges through shared standards such as FHIR, enabling smoother data exchange as seen in Belgium and Spain’s national AI data pilots.
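
To make “shared standards such as FHIR” concrete, here is a minimal sketch (not taken from the report) of what a standards-based exchange can look like: a single HL7 FHIR R4 Observation resource assembled in Python and posted to a FHIR endpoint. The server URL, patient reference, and clinical values are illustrative placeholders of our own, not details from the Belgian or Spanish pilots.

```python
import json
import urllib.request

# A minimal HL7 FHIR R4 Observation (body temperature), built as plain JSON.
# All identifiers, values, and the server URL below are illustrative placeholders.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "8310-5",                    # LOINC code for body temperature
            "display": "Body temperature",
        }]
    },
    "subject": {"reference": "Patient/example"},  # placeholder patient reference
    "valueQuantity": {
        "value": 37.2,
        "unit": "C",
        "system": "http://unitsofmeasure.org",    # UCUM units
        "code": "Cel",
    },
}

# POST the resource to a (hypothetical) hospital FHIR server endpoint.
request = urllib.request.Request(
    "https://fhir.example-hospital.eu/fhir/Observation",
    data=json.dumps(observation).encode("utf-8"),
    headers={"Content-Type": "application/fhir+json"},
    method="POST",
)
# response = urllib.request.urlopen(request)  # uncomment against a real FHIR server
```

Because systems that speak FHIR expect the same resource shapes and terminology bindings (LOINC codes, UCUM units), data exchanged this way does not need bespoke mapping for every integration, which is the interoperability gain the report attributes to shared standards.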

At the policy level, Germany, France, and Belgium have introduced structured assessment pathways to evaluate and certify AI tools, paving the way for future reimbursement models. Other countries, including Finland, the Netherlands, and Spain, are developing national frameworks to support responsible AI innovation in healthcare.

Global examples reinforce these lessons. Johns Hopkins Hospital in the U.S. used AI to streamline patient flow and cut operating delays, while “digital scribe” tools have reduced clinicians’ documentation time. In Europe, validated AI symptom-checker apps show potential to expand access to basic triage support.

The study concludes that effective deployment depends on a mix of best practices: co-design with clinical users, establishment of AI assurance labs for post-market validation, and creation of EU-wide registries of trusted AI tools. Collaborative learning, both within and beyond Europe, is emerging as a key driver for safe and scalable AI adoption in healthcare.

Regulatory and Compliance Implications for Stakeholders

The report underscores that AI in healthcare now sits at the intersection of multiple EU regulations, with growing implications for manufacturers, authorised representatives, and healthcare providers.

The AI Act will classify most medical AI systems as high-risk, adding requirements for transparency, data quality, risk management, and human oversight. These obligations come in addition to MDR/IVDR compliance, reinforcing the need for integrated lifecycle documentation, post-market monitoring, and ongoing performance validation. While this creates short-term complexity, the report notes that a unified framework like the AI Act may ultimately streamline compliance across Member States.

Other initiatives will directly influence deployment. The European Health Data Space (EHDS) will improve data interoperability and access, but hospitals must upgrade IT systems and strengthen governance to comply. The revised Product Liability Directive (PLD) extends liability to adaptive AI systems, highlighting the need for strong post-market surveillance and traceability.
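
By way of illustration only, the sketch below shows one way a deployment team might log each AI inference to support that kind of traceability. The function name, record fields, hashing choice, and file-based storage are our own assumptions, not a format prescribed by the PLD, the AI Act, or the report.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_inference(model_id: str, model_version: str,
                  input_payload: dict, output: dict,
                  log_path: str = "inference_audit.jsonl") -> dict:
    """Append one audit record per AI inference for post-market traceability."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,  # which model version produced the output
        # Hash of the input rather than the raw data, to keep an auditable link
        # without duplicating sensitive patient information in the log.
        "input_hash": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,  # e.g. predicted finding and confidence
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example call with dummy values
log_inference("triage-cad", "2.3.1",
              {"study_uid": "1.2.840.0000", "modality": "CR"},
              {"finding": "suspected pneumothorax", "confidence": 0.87})
```

Recording the model version alongside a hash of the input is what makes it possible, after an update to an adaptive system, to reconstruct which version produced a given output and on what data, without storing patient records twice.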

Manufacturers are encouraged to embed compliance early in design, focusing on bias mitigation, transparency, and human oversight. Hospitals, meanwhile, should prepare internal governance structures such as AI oversight or ethics committees to manage safe deployment. The overarching message: trust and compliance are now inseparable from innovation in medical AI.

Outlook: Towards Sustainable AI Integration

The report outlines a clear path for sustainable AI adoption in healthcare. It recommends establishing EU-wide monitoring frameworks to track AI deployment, certification rates, and readiness across Member States, along with Centres of Excellence to test, validate, and share best practices. These initiatives aim to harmonise standards, accelerate responsible innovation, and strengthen collaboration between hospitals, regulators, and developers.

For regulatory and quality professionals, the message is clear: success in AI-enabled healthcare will depend on robust compliance, interoperability, and post-market vigilance. Organisations that embed transparency, traceability, and continuous monitoring into their quality systems will be best positioned for the next regulatory phase.

To assess your organisation’s readiness for AI compliance under MDR, the AI Act, or EHDS, contact MedQAIR’s regulatory experts for tailored guidance on aligning your documentation, governance, and deployment workflows.

