Rule 11 looks simple until you justify the classification

Written by Hatem Rabeh, MD, MSc Ing

Your Clinical Evaluation Expert and Partner


I see manufacturers classify their software as Class I or IIa under Rule 11, then struggle for months to defend the decision. The rule itself is short. The justification is where most submissions collapse. The Notified Body asks for evidence, and suddenly the entire classification rationale unravels.

Rule 11 of the MDR addresses software classification in two parts. Paragraph (a) covers standalone software intended for diagnostic or therapeutic purposes. Paragraph (b) covers software that drives or influences devices. Both paragraphs link classification to the intended purpose and the degree of patient risk.

The text of the rule feels straightforward. But when you sit down to write the justification, the simplicity disappears. You realize the rule forces you to make explicit what is usually implicit in device design.

The Core Problem with Rule 11

Rule 11 requires you to explain how your software affects patient care and what happens if it fails. Most manufacturers describe what the software does. They do not describe what could go wrong or how serious the consequences would be.

A diagnostic support tool that flags abnormal lab values might seem low-risk. But if a clinician relies on it to rule out a life-threatening condition, the risk profile changes. If the software misses a critical value, the patient could deteriorate without intervention.

This is where classification justifications break down. The manufacturer describes functionality. The Notified Body or competent authority asks about failure modes and clinical impact. The gap between those two perspectives becomes a deficiency.

Common Deficiency
Classification justifications that describe features and algorithms without addressing what happens when the software provides incorrect output or fails to detect a condition.

Paragraph (a): Standalone Diagnostic or Therapeutic Software

Paragraph (a) applies to software intended to provide information used to make diagnostic or therapeutic decisions. The classification depends on whether decisions could cause serious deterioration of health, lead to surgical intervention, or result in death.

The challenge is defining what counts as a serious deterioration of a person's state of health. Some manufacturers interpret this narrowly. They argue that their software only provides supplementary information, so the clinician remains responsible for the final decision.

But the regulation does not ask whether the software makes the final decision. It asks whether the information could lead to decisions with serious consequences. If a clinician acts on incorrect software output, the outcome can be severe even if the software was labeled as decision support.
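The tiering logic behind this reasoning can be sketched as a small decision function. The following is an illustrative Python sketch, not regulatory text; the consequence labels are shorthand chosen for the example, and the real determination always rests on your documented risk analysis.

```python
def rule_11a_class(worst_consequence: str) -> str:
    """Illustrative sketch of the Rule 11 severity tiers for software that
    provides information used for diagnostic or therapeutic decisions.

    'worst_consequence' stands for the most severe outcome a decision based
    on incorrect software output could plausibly cause. The labels below are
    shorthand for this example, not wording from the Regulation.
    """
    if worst_consequence in {"death", "irreversible_deterioration"}:
        return "Class III"
    if worst_consequence in {"serious_deterioration", "surgical_intervention"}:
        return "Class IIb"
    # Software informing diagnostic or therapeutic decisions starts at IIa,
    # regardless of how the output is labeled in the marketing material.
    return "Class IIa"
```

Notice that the function never returns Class I: once software provides information used for diagnostic or therapeutic decisions, Class IIa is the floor. That is exactly why a "decision support" label, on its own, does not lower the classification.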

I have reviewed files where manufacturers classified imaging analysis software as Class I because it did not claim to diagnose. The software highlighted regions of interest for radiologists. The manufacturer argued this was purely informational.

The problem emerged during clinical evaluation. The SOTA review showed similar software products classified as Class IIa or IIb. The reasoning was clear: if the software fails to highlight a malignant lesion, the radiologist might miss it. That could delay diagnosis and treatment, leading to serious patient harm.

The classification was challenged. The manufacturer had to reclassify and repeat the conformity assessment under a different pathway. Months of work redone because the initial justification did not address the actual clinical risk.

Key Insight
Classification under Rule 11(a) depends on the consequences of incorrect information, not on whether the software claims diagnostic authority. Focus your justification on failure scenarios and clinical impact.

Paragraph (b): Software Driving or Influencing Devices

Paragraph (b) covers software that drives a device or influences its use. The classification matches the device being driven or influenced. This seems mechanical until you encounter edge cases.

What if the software influences multiple devices with different classifications? What if it only influences part of the device function? What if the software provides optional features that the user can disable?

The rule says the software takes the classification of the device. But which device? And what if the influence is indirect?

I worked on a case involving software that managed infusion pump settings. The software did not control the pump directly. It calculated recommended infusion rates based on patient data and transmitted those recommendations to the clinical team. The team programmed the pump manually.

The manufacturer initially argued the software was Class I under Rule 11(a) because it provided information for therapeutic decisions. But the Notified Body questioned whether paragraph (b) applied since the software influenced the use of a Class IIb device.

The distinction mattered. Under paragraph (a), the manufacturer could argue the software only provided decision support. Under paragraph (b), the classification would automatically match the infusion pump.

The resolution required detailed risk analysis. We demonstrated that the software did not directly drive or control the pump. The clinical team reviewed and validated every recommendation before programming. The final classification remained under paragraph (a), but only after extensive documentation of the workflow and risk mitigation.

This case illustrates the problem. Rule 11 looks binary, but application requires judgment. And judgment requires documentation.

Common Deficiency
Failing to address whether software that interacts with other devices falls under paragraph (a) or (b), especially when the interaction is informational rather than direct control.

The Intended Purpose Trap

Rule 11 classification hinges on intended purpose. But intended purpose is not just what you write in the instructions for use. It includes how the device is presented, marketed, and actually used.

Some manufacturers try to downplay functionality to achieve a lower classification. They describe the software as informational or educational, even though clinicians use it to make treatment decisions.

This approach fails during clinical evaluation. The SOTA review reveals how similar devices are actually used. The PMCF data shows how your own device is used in practice. If the real-world use contradicts the stated intended purpose, the classification becomes indefensible.

I reviewed a file where the manufacturer claimed the software was for educational purposes only. The risk management file and usability testing described clinical workflows where the software output directly influenced treatment planning. The inconsistency was obvious.

The Notified Body rejected the classification. The manufacturer had to rewrite the intended purpose, reclassify the device, and redo the conformity assessment. The attempt to minimize classification created more work, not less.

What the Justification Must Contain

A defensible Rule 11 classification justification must include several elements. First, a clear statement of intended purpose that reflects actual use. Second, identification of potential failure modes and their clinical consequences. Third, analysis of whether the software provides diagnostic or therapeutic information and the severity of decisions based on that information.

For paragraph (a), you need to explain what happens if the software provides incorrect output. What decision might a clinician make based on that output? What could result from that decision? This is not speculation. It should be grounded in clinical risk analysis.

For paragraph (b), you need to demonstrate how the software influences the device. Is the influence direct or indirect? Does the software control device operation or only provide input that users manually apply? The distinction determines which paragraph applies.

Both pathways require references to risk management documentation. The classification should be consistent with the harm severities identified in your ISO 14971 risk analysis. If the risk management file identifies serious patient harm as a potential consequence, the Rule 11 classification must reflect that.

I have seen files where the risk management identified severe risks, but the classification justification claimed Class I. The disconnect was immediate. Either the risk analysis was wrong, or the classification was wrong. Either way, the submission was incomplete.
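That disconnect can be made mechanically checkable by treating the worst severity in the risk file as setting a floor on the Rule 11 class. The sketch below is illustrative Python; the severity-to-class mapping is an assumption chosen for the example, not a mapping defined by ISO 14971 or the MDR.

```python
# Class ordering used for comparison: I < IIa < IIb < III.
CLASS_ORDER = {"I": 0, "IIa": 1, "IIb": 2, "III": 3}

# Hypothetical floor: the lowest Rule 11 class plausibly consistent with the
# worst harm severity in the ISO 14971 file. This mapping is an assumption
# for illustration only, not defined by any standard.
MIN_CLASS_FOR_SEVERITY = {
    "negligible": "I",
    "minor": "IIa",
    "serious": "IIb",
    "catastrophic": "III",
}

def classification_consistent(worst_severity: str, claimed_class: str) -> bool:
    """Return False when the claimed class sits below what the risk file implies."""
    floor = MIN_CLASS_FOR_SEVERITY[worst_severity]
    return CLASS_ORDER[claimed_class] >= CLASS_ORDER[floor]
```

A check of this kind, run during internal document review, would immediately flag a Class I claim sitting next to a risk file that lists serious patient harm.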

Key Insight
Your Rule 11 classification justification must align with your risk management file. Inconsistencies between risk analysis and classification create immediate deficiencies that delay review.

MDCG Guidance on Software Classification

MDCG 2019-11 provides guidance on qualification and classification of software. It addresses when software qualifies as a medical device and how to apply classification rules, including Rule 11.

The guidance emphasizes that classification depends on the degree of impact on patient care. It provides examples, but the examples are not exhaustive. Your device will not match the examples exactly. You still need to reason through the classification based on your specific intended purpose and risk profile.

The guidance also clarifies that decision support software can be Class IIa or higher if the information significantly influences clinical decisions. The term “decision support” does not automatically mean low risk.

This is important because many manufacturers assume decision support equals Class I. The guidance makes clear that the classification depends on the severity of the decisions, not the label you apply to the software.

The Clinical Evaluation Connection

Rule 11 classification directly affects clinical evaluation requirements. Higher classification means more extensive clinical data requirements. If your classification is wrong, your clinical evaluation plan will be insufficient.

I have reviewed files where the manufacturer classified software as Class IIa, then provided clinical data appropriate for Class I. The clinical evaluation report did not address equivalence rigorously. It did not include sufficient clinical performance data. The justification assumed lower risk than the classification indicated.

The Notified Body identified the gap. The manufacturer had to generate additional clinical data or reconsider the classification. Both options required significant rework.

This is why classification must be resolved early and documented thoroughly. The clinical evaluation strategy depends on it. The entire conformity assessment timeline depends on it.

When Classification Becomes a Negotiation

In some cases, the initial classification is challenged during review. The Notified Body or competent authority questions the justification. The manufacturer defends the original classification or provides additional rationale.

This is not really negotiation. It is clarification. But it feels like negotiation because the outcome determines the assessment pathway.

The key is to have the documentation ready. If you can provide detailed risk analysis, clinical workflow descriptions, and references to similar devices, you can support your classification. If your justification is thin, you will be asked to reclassify or provide more evidence.

I have participated in these discussions. The manufacturers who succeed are those who documented their reasoning from the beginning. They did not just pick a classification and hope it would pass. They built the case with risk data, clinical context, and regulatory logic.

The manufacturers who struggle are those who treated classification as a checkbox. They applied the rule superficially without addressing the underlying clinical risk. When challenged, they had nothing to reference.

Practical Steps for Defensible Classification

Start with a thorough risk analysis before you classify. Identify all potential failure modes and their clinical consequences. Use that analysis to inform your classification decision.

Write the intended purpose clearly and honestly. Do not minimize functionality to achieve lower classification. If the device performs diagnostic or therapeutic functions, state that explicitly.

Document how you applied Rule 11. Explain why paragraph (a) or (b) applies. Reference specific risk scenarios. Show how you determined the severity of potential consequences.

Align your classification with the rest of your technical documentation. The risk management file, clinical evaluation plan, and usability documentation should all support the same classification.

Review similar devices on the market and their classifications. If your device has similar intended purpose and risk profile, the classification should align. If it does not, you need to explain why.

Finally, prepare for questions. The Notified Body or competent authority will challenge your classification if the justification is weak. Have the evidence ready to defend your decision or be prepared to reclassify if the evidence does not support your original choice.

Classification is not a formality. It is a technical decision with regulatory and clinical consequences. Rule 11 is short, but the justification requires depth. Do the work upfront, and the rest of the submission will be stronger for it.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 (MDR), Annex VIII Classification Rules
– MDCG 2019-11: Guidance on Qualification and Classification of Software
– ISO 14971: Application of risk management to medical devices

Deepen Your Knowledge

Read the Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of clinical evaluation requirements under Regulation (EU) 2017/745.