On 3 December 2025, the European Commission released a 12-page draft Implementing Regulation (Ares(2025)10569703) establishing detailed rules for the set-up and operation of AI regulatory sandboxes under the landmark EU Artificial Intelligence Act.
This draft sets out a structured framework for supervised testing environments, allowing AI innovators, particularly in healthcare, to develop and validate AI systems under regulatory oversight before placing them on the market. The Commission has opened a public consultation on this draft, running until 30 December 2025, inviting feedback from all stakeholders to help shape the final regulation. You can contribute or find consultation details on the European Commission’s ‘AI regulatory sandboxes – rules for their set-up and operation’ page.
What Are AI Regulatory Sandboxes?
AI regulatory sandboxes are controlled environments in which developers can test AI systems, often innovative or complex ones, under the supervision of regulators. This approach gives participants real-time guidance on compliance with the EU AI Act rather than leaving issues to surface in post-market reviews. For healthcare AI, which often involves high-risk applications such as diagnostics or treatment recommendations, sandboxes help balance innovation with patient safety, data privacy, and regulatory requirements.
Key Provisions in the Draft Regulation
Eligibility and Access
- Limited to AI systems that are pre-market or undergoing significant modifications.
- Small and medium-sized enterprises (SMEs), including startups, can participate free of charge to encourage innovation.
- Larger companies may be subject to cost-recovery fees to support sandbox operations.
Operational Framework
- Each participant must agree with competent authorities on a detailed sandbox plan outlining objectives, timelines, methodologies, and safeguards to mitigate risks.
- Authorities are required to document activities, provide written proof of participation, and issue exit reports capturing regulatory learnings.
- Projects can be suspended if safety or fundamental rights concerns arise during testing.
- Annual reports on sandbox activities will be published to ensure transparency and share regulatory insights across the EU.
Importantly, participation in these sandboxes and the exit documentation they produce do not amount to a formal EU declaration of conformity under the AI Act; they serve as evidence of regulatory engagement and learning.
Relevance for Healthcare AI
Healthcare AI systems, typically classified as high-risk under the AI Act, demand stringent compliance with transparency, safety, and data governance rules. Sandboxes represent a unique opportunity for healthcare developers to:
- Receive direct regulatory guidance during product development.
- Test algorithms and models using real-world clinical data under supervision.
- Leverage cost-free sandbox participation if qualifying as SMEs or startups.
- Obtain documented regulatory engagement that informs future market compliance strategies.
Timeline and Recommendations for Healthcare AI Developers
- Public Consultation Period: 2 December to 30 December 2025.
- Expected Finalization and Implementation: 2026.
Healthcare AI developers are encouraged to:
- Thoroughly review the draft Implementing Regulation to understand the operational details and potential impact on development timelines.
- Provide feedback during the consultation phase to highlight sector-specific concerns, such as clinical data handling and cross-border testing complexities.
- Assess eligibility for sandbox participation, particularly if developing AI solutions not yet on the market or undergoing significant updates.
Context and Significance
This regulation is a critical component of the EU AI Act’s risk-based governance framework, aiming to foster responsible innovation while protecting citizens’ rights and safety. For the healthcare sector, which faces unique challenges in balancing patient safety, data privacy, and rapid technological advances, the sandbox concept offers practical pathways to compliant AI innovation across Europe.
Engagement in this consultation presents a valuable chance for medical AI stakeholders to influence the final regulatory approach, ensuring that these sandboxes can effectively support the safe deployment of transformative healthcare technologies.
For expert guidance on navigating the evolving EU AI regulatory landscape and ensuring compliance for AI-enabled medical devices, visit MedQAIR, your trusted partner in medical device regulatory compliance.