When your SaMD fails audit because of a security patch

Written by Hatem Rabeh, MD, MSc Ing

A manufacturer submitted a clinical evaluation report for a diagnostic SaMD. The device had strong clinical data. The equivalence claim was solid. The literature search was thorough. The Notified Body rejected the submission. The reason? The cybersecurity risk analysis was outdated after a third-party software update, and the clinical evaluation did not address the potential impact on clinical safety and performance.

This happens more often than most teams expect. Software as a Medical Device brings a regulatory reality that traditional hardware does not face: the clinical performance you validated last year might change next month because of a security patch, an OS update, or a vulnerability disclosure.

And when that happens, your clinical evaluation is no longer current.

The regulatory framework does not separate clinical and security

MDR Article 61 requires manufacturers to demonstrate safety and performance based on clinical evidence. MDCG 2020-1 reinforces that clinical evaluation must address all risks, including those related to cybersecurity, when they affect clinical safety or performance.

This is not a theoretical connection. If a security vulnerability can alter the output of your device, change the accuracy of a measurement, delay a critical alert, or expose patient data in a way that affects clinical decisions, then it is a clinical risk.

Your clinical evaluation report must reflect that.

But most manufacturers treat cybersecurity as an ISO 14971 topic handled by the risk management team. The clinical evaluation team rarely sees the cybersecurity documentation. The connection is not made until the Notified Body asks the question.

Common Deficiency
The clinical evaluation report does not reference the cybersecurity risk analysis. When a vulnerability is identified or a patch is applied, the manufacturer updates the risk file but does not reassess clinical safety and performance. The CER becomes outdated without anyone noticing.

Why this matters more for SaMD than for hardware

A pacemaker changes slowly. Its design is locked. Its clinical performance is validated once and remains stable for years unless there is a field safety corrective action.

Software is different. It runs on platforms you do not control. It integrates with third-party libraries. It depends on operating systems that receive updates outside your release cycle. A Windows patch, an API change, a deprecated protocol can all affect functionality.

And if functionality changes, performance might change. If performance changes, clinical safety might change. If clinical safety changes, your clinical evaluation is no longer valid.

This is not hypothetical. I have seen devices where a browser update changed how data was displayed, affecting interpretation. I have seen diagnostic algorithms that performed differently after an OS security patch altered floating-point precision. I have seen notification systems that stopped working because a third-party authentication service changed its API.

None of these were caught during routine post-market surveillance because they were not classified as complaints. They were discovered during audits or field observations.

The clinical evaluation must anticipate change

Your CER cannot assume the software remains static. MDCG 2020-1 requires that the clinical evaluation addresses the entire lifecycle, including foreseeable changes. For SaMD, this includes cybersecurity updates.

Here is what that means in practice:

First, your clinical evaluation plan must define how cybersecurity risks are monitored and how changes to those risks trigger reassessment of clinical safety and performance. This cannot be vague. It must specify who reviews security bulletins, who assesses clinical impact, and what threshold triggers an update to the CER.

Second, your State of the Art analysis must include the evolving cybersecurity landscape. If your device processes sensitive health data, the current best practices for encryption, authentication, and access control are part of the State of the Art. If those practices change, your device might fall behind.

Third, your PMCF plan must include active monitoring of cybersecurity incidents and patches. This is not just about logging complaints. It is about tracking every security update, every vulnerability disclosure, and every patch applied to assess whether clinical performance was affected.

Key Insight
The PMCF plan for SaMD should explicitly include a process for reviewing all cybersecurity patches and updates. Each review should document whether the patch could affect clinical safety or performance. If yes, a reassessment of clinical data is triggered. This creates a traceable link between cybersecurity risk management and clinical evaluation.
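To make the traceable link concrete, here is a minimal sketch of what such a patch review record could look like. This is an illustration only, not a prescribed format: the class names, the impact levels, and the example patch identifier are all hypothetical, and your QMS would define its own fields and thresholds.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class ClinicalImpact(Enum):
    NONE = "no impact"        # documented, no CER action needed
    POSSIBLE = "possible"     # triggers reassessment of clinical data
    CONFIRMED = "confirmed"   # triggers a CER update

@dataclass
class PatchReview:
    """One documented review of a security patch or dependency update."""
    patch_id: str             # vendor bulletin or CVE reference
    component: str            # affected software component
    review_date: date
    reviewer: str             # named role, per the clinical evaluation plan
    impact: ClinicalImpact
    rationale: str            # why this impact level was assigned

def cer_reassessment_required(review: PatchReview) -> bool:
    """A patch triggers clinical reassessment unless impact is ruled out."""
    return review.impact is not ClinicalImpact.NONE

# Hypothetical example: an OS patch whose effect on the algorithm is unverified
review = PatchReview(
    patch_id="OS-2024-03 (example)",
    component="host operating system",
    review_date=date(2024, 3, 1),
    reviewer="Clinical Evaluation Lead",
    impact=ClinicalImpact.POSSIBLE,
    rationale="Patch alters math library; diagnostic precision not yet reverified.",
)
print(cer_reassessment_required(review))  # True
```

The point of the structure is the default: anything other than a documented "no impact" conclusion flows into clinical reassessment, which is exactly the traceable link a Notified Body will look for.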

What Notified Bodies look for

When a Notified Body reviews a SaMD clinical evaluation, they check whether the manufacturer understands that software changes over time. They look for evidence that the clinical evaluation process accounts for this.

Here are the questions they ask:

Does the CER reference the cybersecurity risk analysis? Is there a clear link between identified vulnerabilities and potential clinical impact? If a vulnerability was found and patched, did the manufacturer reassess whether the patch affected performance?

Is the PMCF plan designed to capture security-related performance issues? Does it include monitoring of security bulletins, incident reports, and third-party dependency updates?

When a major update is applied, such as a migration to a new OS version or a switch to a new encryption standard, is there a process to reassess clinical equivalence and performance data?

Most manufacturers cannot answer these questions clearly. The cybersecurity team works in parallel with the clinical team. The risk file is updated. The CER is not.

The hidden risk of third-party dependencies

Many SaMD devices rely on third-party software components. Cloud platforms. AI frameworks. Database engines. Authentication services. These dependencies introduce a risk that most clinical evaluation reports ignore.

If a third-party library is updated and the manufacturer does not control the update schedule, the clinical performance of the device can change without a formal design change. The manufacturer might not even notice until a user reports unexpected behavior.

MDCG 2019-16 on cybersecurity emphasizes that manufacturers must maintain awareness of vulnerabilities in all software components, including third-party dependencies. But awareness is not enough. If a vulnerability or a patch affects clinical output, the clinical evaluation must be updated.

This is where the regulatory expectation becomes difficult to meet. You cannot freeze your software environment forever. You cannot ignore security patches. But every patch is a potential clinical change.

The only way to manage this is to build the reassessment process into your quality system. Every patch. Every update. Every dependency change gets reviewed for clinical impact. If impact is identified, the CER is updated. If not, the review is documented.

Key Insight
For SaMD, the clinical evaluation is not a one-time deliverable. It is a living process that reacts to every significant change in the software environment. This includes security patches, third-party updates, and infrastructure changes. If your QMS does not have a trigger mechanism linking these changes to clinical reassessment, you will fail to maintain compliance over time.

The practical challenge of maintaining current clinical evaluation

The regulatory expectation is clear. The practical execution is difficult. Most manufacturers do not have the resources to reassess clinical data after every security patch. Most clinical teams do not have visibility into the cybersecurity update schedule.

So what can you do?

Start by creating a formal link between your cybersecurity risk management process and your clinical evaluation process. When a vulnerability is identified or a patch is planned, the cybersecurity team must notify the clinical team. The clinical team reviews whether the change could affect safety or performance. If yes, reassessment is triggered. If no, the decision is documented.

This does not require a full CER update after every patch. It requires a documented assessment. Most of the time, the answer will be no impact. But when impact is identified, you need to act quickly.

Second, structure your PMCF plan to capture cybersecurity-related performance issues. This means defining specific data points related to security events, such as device downtime due to patches, performance degradation after updates, or user-reported issues following infrastructure changes.
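As a sketch of what capturing those data points could look like in practice, here is a hypothetical record structure for security-related PMCF events, with a simple rule for flagging events that warrant clinical review. The field names, categories, and the 60-minute downtime threshold are assumptions for illustration; your PMCF plan would define its own.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class SecurityEventRecord:
    """One security-related event captured as a PMCF data point (illustrative)."""
    event_date: date
    category: str    # "patch_downtime" | "post_update_degradation" | "infra_user_report"
    description: str
    downtime_minutes: Optional[int] = None
    performance_metric_delta: Optional[float] = None  # e.g. change in sensitivity

def flag_for_clinical_review(rec: SecurityEventRecord) -> bool:
    """Flag events that may indicate an effect on clinical performance."""
    if rec.performance_metric_delta is not None and rec.performance_metric_delta != 0.0:
        return True
    if rec.downtime_minutes is not None and rec.downtime_minutes > 60:
        return True  # example threshold; set per your PMCF plan
    return rec.category == "infra_user_report"

events = [
    SecurityEventRecord(date(2024, 5, 2), "patch_downtime",
                        "Monthly OS patch window", downtime_minutes=15),
    SecurityEventRecord(date(2024, 5, 9), "post_update_degradation",
                        "Sensitivity drop observed after TLS library update",
                        performance_metric_delta=-0.03),
]
flagged = [e for e in events if flag_for_clinical_review(e)]
print(len(flagged))  # 1
```

A routine 15-minute patch window is logged but not escalated; a measurable sensitivity change after an update is flagged for clinical review. Both outcomes are documented, which is what keeps the PMCF trail auditable.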

Third, plan for major updates. When you migrate to a new platform, update a core dependency, or implement a significant security enhancement, treat it as a design change. Reassess clinical equivalence. Update the CER. Do not assume that clinical performance remains unchanged.

Why this matters now more than before

The regulatory pressure on cybersecurity is increasing. MDCG 2019-16 made cybersecurity a central part of the risk management process. The EU Cyber Resilience Act raises the baseline for connected products more broadly, and the regulatory direction is clear. Notified Bodies are trained to look for gaps between cybersecurity documentation and clinical evaluation.

And as software becomes more complex, the risk of security-related clinical impact grows. AI-driven devices depend on data pipelines that can be disrupted by security measures. Cloud-based devices depend on infrastructure that changes regularly. Mobile apps depend on OS features that evolve rapidly.

If your clinical evaluation process does not account for this reality, you will face deficiencies. Not because your device is unsafe. But because your documentation does not demonstrate that you understand how cybersecurity affects clinical performance.

This is the unexpected intersection. Cybersecurity is not separate from clinical evaluation. It is part of it. And manufacturers who do not integrate the two will struggle to maintain compliance as software evolves.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report). For SaMD, MDCG 2019-16 (cybersecurity) and MDCG 2020-1 (clinical evaluation of medical device software) are also essential.

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– MDR 2017/745 Article 61
– MDCG 2020-1: Guidance on clinical evaluation (MDR) / performance evaluation (IVDR) of medical device software
– MDCG 2019-16: Guidance on cybersecurity for medical devices

Deepen Your Knowledge

Read Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of clinical evaluation under EU MDR 2017/745.