Your software changed. Does your clinical evidence still hold?
I reviewed a technical file last month where the manufacturer updated their SaMD to version 3.2. The clinical evaluation report referenced version 2.7. When I asked about the clinical impact assessment, they said the changes were “minor” and “didn’t affect safety.” The Notified Body disagreed. The file went into major deficiency status.
In This Article
- The Core Regulatory Requirement
- What Actually Constitutes a Change?
- The Clinical Impact Assessment Framework
- What Evidence Is Actually Required?
- The Role of PMCF in Software Updates
- Documentation and Traceability
- When Equivalence No Longer Holds
- How Notified Bodies Review Software Changes
- Practical Steps for Manufacturers
- The Cost of Getting This Wrong
- Final Thought
This scenario repeats across the industry. Software changes constantly. Patches, updates, algorithm refinements, UI modifications. Some are trivial. Some fundamentally alter the clinical performance or safety profile. The problem is not the change itself. The problem is the absence of a systematic framework to determine when new clinical evidence is required.
Under MDR, clinical evaluation is not a one-time exercise. It is a continuous process that must reflect the current state of the device. When software changes, the clinical evaluation must be reassessed. But how do you decide if reassessment means updating a paragraph or conducting new clinical investigations?
Most manufacturers rely on intuition or internal risk assessment. That is not enough. MDCG 2020-1 and MDCG 2020-3 provide guidance, but the application to software remains unclear for many teams. Let me walk you through how this actually works in practice.
The Core Regulatory Requirement
Article 61(11) of MDR requires manufacturers to update clinical evaluation throughout the lifecycle of the device. This is not optional. It applies to all changes, including software modifications.
The clinical evaluation report must reflect the device as it is currently manufactured and placed on the market. If your CER describes version 2.7 and you are now marketing version 3.2, you have a compliance gap. The question is how significant that gap is and what evidence is needed to close it.
MDCG 2020-1 clarifies that clinical evaluation is dynamic. It must incorporate new data from PMCF, literature, and any changes to the device. For software, this means every version change triggers a review of clinical relevance.
Version control and clinical evaluation are not separate processes. Every software release should include a documented clinical impact assessment before the CER is updated.
What Actually Constitutes a Change?
Not all software updates are equal. A bug fix that corrects a display error is not the same as a change to the algorithm that calculates diagnostic results. The challenge is defining where the line is.
From a regulatory perspective, the critical distinction is whether the change affects the intended purpose, clinical claims, indications for use, risk profile, or clinical performance. If any of these are impacted, the clinical evaluation must be updated with new evidence or analysis.
In practice, I see manufacturers struggle with intermediate cases. They update the user interface. They optimize processing speed. They add a data export function. Each of these may seem neutral. But consider the clinical context.
If the UI change alters how a clinician interprets results, it affects usability and potentially safety. If processing speed changes diagnostic workflow, it may introduce new use errors. If data export enables integration with other systems, it expands the risk environment.
The question is not whether the change is “technical” or “clinical.” The question is whether the change alters the clinical evidence base that supports the device.
Too often, manufacturers classify changes as “software only” or “non-clinical” without performing a structured clinical impact assessment. This leads to outdated CERs and major findings during audits.
The Clinical Impact Assessment Framework
The framework I use is built on three questions. These are not complex, but they require honest answers.
First question: Does the change affect the intended purpose or indications for use?
If yes, you need new clinical evidence. The device is now different in scope. Existing literature and clinical data may not cover the new use case. Equivalence claims are likely invalid. You need clinical investigations or new literature searches scoped to the updated indication.
Second question: Does the change affect the safety profile or risk classification?
If yes, the risk management file must be updated, and the clinical evaluation must address the new or modified risks. This often requires additional clinical data to demonstrate that the risks are acceptable and that residual risks are outweighed by clinical benefits.
Third question: Does the change affect clinical performance or how clinical claims are supported?
This is the most frequent trigger. Algorithm changes, sensitivity/specificity modifications, changes in data inputs or outputs—all of these affect performance. If performance changes, the clinical evidence supporting your claims must be re-evaluated. You may need new performance studies, new clinical data, or updated post-market surveillance evidence.
If the answer to all three questions is no, you may proceed with a documented rationale in the CER explaining why the change does not require new evidence. But this rationale must be explicit, traceable, and technically justified.
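The three questions above work well as a formal gate in a change-control tool, so that every change request carries a recorded answer to each question and a derived evidence action. A minimal sketch (the class and function names, and the exact action strings, are illustrative, not from any regulation or standard):

```python
from dataclasses import dataclass

@dataclass
class ChangeAssessment:
    """Answers to the three clinical impact questions for one software change."""
    affects_intended_purpose: bool      # Q1: intended purpose / indications for use
    affects_risk_profile: bool          # Q2: safety profile / risk classification
    affects_clinical_performance: bool  # Q3: clinical performance / claims

def required_action(a: ChangeAssessment) -> str:
    """Map the three answers to the evidence action, most demanding trigger first."""
    if a.affects_intended_purpose:
        return "new clinical evidence: investigation or re-scoped literature search"
    if a.affects_risk_profile:
        return "update risk management file + clinical data on new/modified risks"
    if a.affects_clinical_performance:
        return "re-evaluate performance evidence: studies, data, or PMS evidence"
    return "documented rationale in CER: existing evidence remains valid"
```

The point of encoding the gate is not automation for its own sake; it forces an explicit, traceable answer to each question for every release, which is exactly what auditors look for.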
What Evidence Is Actually Required?
When new evidence is needed, the type and extent depend on the magnitude of the change. MDCG 2020-5 provides the framework for equivalence and MDCG 2020-6 for sufficient clinical evidence, but software changes often fall into gray areas.
For minor changes that do not affect core performance—such as bug fixes with no clinical impact—a documented review of existing data and PMCF is usually sufficient. The CER is updated with a section explaining the change, the clinical impact assessment, and the conclusion that existing evidence remains valid.
For moderate changes—such as algorithm refinements that alter sensitivity or specificity within acceptable ranges—you need performance validation data. This may come from bench testing, simulation, or retrospective analysis of real-world data. The clinical evaluation must demonstrate that the change does not degrade safety or performance and that clinical benefits are maintained or improved.
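For the moderate case, a retrospective analysis often boils down to comparing sensitivity and specificity across versions against a pre-specified acceptance margin. A toy illustration with invented confusion-matrix counts (all numbers are hypothetical, purely to show the shape of the analysis):

```python
def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity and specificity from retrospective confusion counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical retrospective counts per software version
v27 = sens_spec(tp=184, fn=16, tn=270, fp=30)   # version 2.7: sens 0.92, spec 0.90
v32 = sens_spec(tp=190, fn=10, tn=262, fp=38)   # version 3.2

delta_sens = v32[0] - v27[0]   # positive: sensitivity improved
delta_spec = v32[1] - v27[1]   # negative: specificity dropped

# If a drop exceeds the pre-specified margin, the change escalates to "major"
# and the documented rationale alone is no longer sufficient.
ACCEPT_MARGIN = 0.05
within_acceptance = delta_sens > -ACCEPT_MARGIN and delta_spec > -ACCEPT_MARGIN
```

In a real file this comparison would include confidence intervals and a justification for the margin, agreed in the clinical evaluation plan, not hard-coded as here.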
For major changes—such as new algorithms, new indications, or expanded patient populations—you need new clinical investigations or robust clinical data from literature and registries. Equivalence is usually not applicable because the device is now substantially different.
The decision tree is not always clear. I work with manufacturers who spend months debating whether their change is “moderate” or “major.” The answer depends on clinical context, risk class, and how the Notified Body interprets the change.
The burden of proof is on the manufacturer. If there is ambiguity about whether new evidence is required, the safe path is to generate or source new data. Underestimating the need for evidence leads to audit findings and delays.
The Role of PMCF in Software Updates
PMCF is critical for software medical devices. Real-world performance data provides ongoing evidence that the device performs as intended across diverse clinical settings and patient populations.
When software changes, PMCF data becomes even more important. It serves as the bridge between pre-market evidence and post-change validation. If you have robust PMCF data showing stable performance across versions, you can use that data to support the argument that a moderate change does not require new clinical investigations.
But PMCF must be designed to capture version-specific data. If your PMCF plan does not track software versions, you lose the traceability needed to support your clinical evaluation updates. I see this frequently—PMCF data exists, but it is not granular enough to link performance to specific versions.
Manufacturers should structure PMCF to include version identifiers in all data collection. This allows you to analyze performance trends over versions, detect degradation, and generate evidence for clinical evaluation updates.
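Concretely, version-aware PMCF means every collected record carries a software version identifier so performance can be summarized per release. A minimal sketch (the record fields and function name are illustrative assumptions, not a prescribed schema):

```python
from collections import defaultdict

# Hypothetical PMCF records; the essential point is the software_version field.
records = [
    {"software_version": "2.7", "correct": True},
    {"software_version": "2.7", "correct": False},
    {"software_version": "3.2", "correct": True},
    {"software_version": "3.2", "correct": True},
]

def accuracy_by_version(records: list[dict]) -> dict[str, float]:
    """Group PMCF outcomes by software version so trends are traceable per release."""
    buckets = defaultdict(list)
    for r in records:
        buckets[r["software_version"]].append(r["correct"])
    return {v: sum(xs) / len(xs) for v, xs in buckets.items()}
```

Without the version field, the same records would only support a pooled figure across releases, which is precisely the granularity gap described above.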
If a change introduces new risks or performance characteristics, the PMCF plan must be updated to specifically monitor those aspects. This is not automatic. It requires a deliberate review of the PMCF plan each time the software changes.
Too often, PMCF plans are static and do not adapt to software changes. This results in a disconnect between the evolving device and the evidence being collected to support it.
Documentation and Traceability
Every software change must leave a traceable record in the technical documentation. The clinical evaluation report must reference the version being evaluated. The risk management file must reflect the version-specific risks. The PMCF plan and reports must align with the version timeline.
In practice, this means version control is a clinical documentation issue, not just a software development issue. I review files where version history is buried in design documents, but the CER references “the device” without specifying which version. This creates ambiguity and raises questions during audits.
The CER should include a version history section that lists all significant changes, the clinical impact assessment for each, and the evidence updates performed. This section should be updated with each new version.
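One way to keep that section consistent across releases is to maintain the version history as structured data and render the CER table from it. A sketch under that assumption (the field names and table columns are illustrative, not a mandated format):

```python
from dataclasses import dataclass

@dataclass
class VersionEntry:
    version: str
    change_summary: str
    clinical_impact: str   # outcome of the three-question assessment
    evidence_update: str   # what was done in the CER as a result

def to_markdown_row(e: VersionEntry) -> str:
    """Render one version-history entry as a row of the CER table."""
    return f"| {e.version} | {e.change_summary} | {e.clinical_impact} | {e.evidence_update} |"

entry = VersionEntry(
    version="3.2",
    change_summary="Algorithm refinement to pre-processing",
    clinical_impact="Q3 triggered: performance affected",
    evidence_update="Retrospective performance analysis added to CER section 7",
)
```

Keeping the history as data rather than free text makes it trivial to verify that every released version has a corresponding impact assessment and evidence entry.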
Traceability also extends to Notified Body communication. If you submit a version update as a significant change notification, the supporting documentation must clearly link the change to the updated clinical evaluation. If the Notified Body approved version 2.7 and you are now submitting version 3.2, they need to see the clinical justification for the delta.
When Equivalence No Longer Holds
Many SaMD manufacturers rely on equivalence to similar devices for their initial clinical evaluation. This is valid under MDCG 2020-5, provided the equivalence criteria are met.
But equivalence is fragile. When your software changes, you must reassess whether equivalence still holds. If your algorithm is now different, if your performance characteristics have shifted, or if your intended use has expanded, the equivalence claim may no longer be valid.
If equivalence breaks, you need to generate your own clinical data. This is a significant regulatory shift. It may require clinical investigations, additional literature reviews scoped to your specific device, or extensive PMCF data demonstrating safety and performance.
I have seen manufacturers continue to rely on equivalence through multiple software updates without reassessing validity. This is a critical mistake. Once equivalence is lost, the entire clinical evidence base is undermined.
The clinical evaluation must explicitly document why equivalence is maintained or why it is no longer applicable. If it is no longer applicable, the path forward must be clearly defined.
How Notified Bodies Review Software Changes
Notified Bodies are increasingly sophisticated in their review of software medical devices. They understand that software changes can fundamentally alter the device without physical modification.
During audits, they look for the clinical impact assessment. They check whether version changes are documented in the CER. They verify that PMCF data is version-specific. They assess whether the clinical evidence is current and reflects the marketed device.
If they find a mismatch—such as a CER describing an old version while a new version is on the market—they issue deficiencies. These deficiencies often require substantial rework, including new evidence generation or updated clinical investigations.
The key is proactive alignment. Update the CER before or concurrent with the version release. Notify the Notified Body of significant changes with full clinical justification. Do not wait until the next surveillance audit to address version drift.
Notified Bodies expect real-time alignment between software versions and clinical documentation. Delayed updates signal weak lifecycle management and trigger deeper scrutiny.
Practical Steps for Manufacturers
First, establish a change control process that includes a clinical impact assessment gate. Every software change request should be evaluated for clinical relevance before approval. This assessment should answer the three questions I outlined earlier.
Second, integrate version tracking into your clinical evaluation process. The CER should have a living version history section. Update it with each release. Document the clinical justification for each version.
Third, align PMCF data collection with version releases. Ensure PMCF captures version-specific performance. Use this data to support clinical evaluation updates and demonstrate ongoing conformity.
Fourth, engage your Notified Body early when a significant change is planned. Do not assume internal classification of “minor” will be accepted. Seek alignment on the clinical evidence requirements before making the change.
Fifth, train your development and regulatory teams to think clinically about software changes. A change that seems purely technical from a coding perspective may have clinical implications. Cross-functional review is essential.
These steps are not administrative overhead. They are how you maintain MDR compliance in a dynamic software environment.
The Cost of Getting This Wrong
The consequences of mismanaging software changes in clinical evaluation are not theoretical. Files get suspended. Market access is delayed. Post-market surveillance orders are issued. In extreme cases, devices are recalled.
More commonly, the cost is time and resource drain. Correcting a misaligned CER after the fact requires retrospective evidence generation, updated literature reviews, and potentially new clinical investigations. This can take six to twelve months and consume significant budget.
The preventable part is the lack of process. Most deficiencies I see are not due to lack of data—they are due to lack of documentation and lack of systematic thinking about software version changes.
Manufacturers who integrate clinical evaluation into their software lifecycle from the start avoid these issues. Those who treat clinical evaluation as a separate, periodic activity struggle continuously.
Final Thought
Software will change. That is the nature of the technology. The question is whether your clinical evaluation keeps pace.
If you update your device faster than you update your clinical evidence, you create regulatory risk. If you treat version changes as purely technical without clinical assessment, you undermine your compliance.
The framework is not complex. Assess clinical impact. Update evidence as needed. Document everything. Align with your Notified Body. These steps protect your market access and demonstrate lifecycle conformity.
Next in this series, I will address how to structure PMCF for SaMD when the device is constantly evolving. The challenge is designing evidence collection that remains meaningful across versions.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report). MDCG 2020-1 (clinical evaluation of medical device software) and MDCG 2020-3 (significant changes) are also directly relevant when software changes.
✌
Peace, Hatem
Your Clinical Evaluation Partner
Follow me for more insights and practical advice.
– MDR 2017/745 Article 61(11)
– MDCG 2020-1: Guidance on Clinical Evaluation
– MDCG 2020-3: Guidance on Significant Changes
– MDCG 2020-5: Clinical Evaluation – Equivalence
Deepen Your Knowledge
Read Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of clinical evaluation under EU MDR 2017/745.