Why your acceptance criteria fail at Notified Body review

Written by Hatem Rabeh, MD, MSc Ing
Your Clinical Evaluation Expert and Partner
You spent months building your clinical evaluation. The data looks solid. The literature review is complete. Then the Notified Body sends back a major finding: your acceptance criteria are not aligned with the claimed benefits, or they do not reflect the actual risks identified. This is not rare. It happens in almost half of the submissions I review.

The issue is rarely about the quality of your clinical data. It is about how you framed the criteria by which that data should be judged. Acceptance criteria are the bridge between what you claim about your device and what the evidence must demonstrate. If that bridge is unclear, incomplete, or misaligned, the entire clinical evaluation structure collapses under scrutiny.

This post explains how to define acceptance criteria that survive regulatory review. Not the theory from guidance documents. The reasoning that reviewers apply when they decide whether your criteria are fit for purpose.

What Acceptance Criteria Actually Are

Acceptance criteria define the thresholds, benchmarks, or outcomes that clinical data must meet to demonstrate that your device is safe and performs as intended. They are specific. They are measurable. They are tied directly to the claims you make and the risks you identified.

They are not vague statements like “device must be safe and effective.” They are not copy-pasted benchmarks from another device. They are not afterthoughts added to satisfy a checklist.

Acceptance criteria answer a fundamental question: what level of evidence, what type of performance, what safety margin is sufficient to justify placing this device on the market for the intended purpose?

Key Insight
Acceptance criteria are the regulatory contract between your claims and your evidence. If the criteria are weak, the contract is invalid, regardless of how much data you provide.

Why Notified Bodies Reject Poorly Defined Criteria

Reviewers look at acceptance criteria early in the assessment. They use them as a lens through which all subsequent clinical data is interpreted. If the criteria are unclear or misaligned, the reviewer cannot determine whether the evidence is sufficient.

Here is what goes wrong in practice:

The criteria are too vague. Statements like “the device must demonstrate acceptable performance” tell the reviewer nothing. Acceptable to whom? Based on what standard? Compared to what benchmark?

The criteria do not match the intended purpose. You claim your device improves patient mobility after surgery. But your criteria focus only on implant survival rates. The reviewer sees a disconnect. The claim is not supported by what you are measuring.

The criteria ignore identified risks. Your risk management file lists infection as a significant residual risk. But your acceptance criteria say nothing about infection rates or how you will evaluate them. The reviewer flags this immediately.

The criteria are not justified. You set a 95% success rate as your benchmark. But you do not explain why 95% is acceptable. Is it based on clinical guidelines? Comparable devices? Literature standards? Without justification, the threshold appears arbitrary.

Common Deficiency
Manufacturers define acceptance criteria in isolation from the risk management file. The criteria focus on performance endpoints but do not address the safety thresholds needed to control residual risks. This creates a gap that reviewers will flag.

Where Acceptance Criteria Come From

Acceptance criteria are not invented. They are derived. They come from three primary sources, and your clinical evaluation must show that derivation clearly.

First source: The intended purpose and claims. Every claim you make generates a requirement for evidence. If you claim the device reduces recovery time, your acceptance criteria must define what reduction is clinically meaningful and how you will measure it.

Second source: The risk management file. Every residual risk generates a requirement for safety evidence. If tissue damage is a residual risk, your acceptance criteria must define the acceptable incidence, severity, and detection method.

Third source: The state of the art. Clinical practice, published guidelines, and performance of comparable devices establish benchmarks. Your acceptance criteria must reflect what is currently considered acceptable in the clinical community for similar indications.

If your criteria do not trace back to these three sources, they are unsupported. And unsupported criteria are rejected.

The Structure That Works

Acceptance criteria should be organized by objective. Each objective corresponds to a claim or a residual risk. Each objective has specific, measurable criteria.

Here is the structure I recommend:

Objective: State what you are trying to demonstrate. Example: “Demonstrate that the device achieves adequate fixation in the target anatomy.”

Parameter: Define what you will measure. Example: “Implant migration at 6 months post-implantation.”

Threshold: Define the acceptable range or limit. Example: “Migration less than 2 mm in 95% of patients.”

Justification: Explain why this threshold is appropriate. Example: “Based on clinical guideline X, migration above 2 mm is associated with increased revision risk. Devices in the same class report migration rates below this threshold in 93-97% of cases.”

This structure makes your reasoning transparent. The reviewer can see what you are measuring, why you are measuring it, and how you decided what is acceptable.
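If you manage criteria in a spreadsheet or a document, this structure maps naturally onto a simple record. Here is a minimal sketch in Python, purely illustrative; the field names and example values are mine, not a prescribed template:

```python
from dataclasses import dataclass

@dataclass
class AcceptanceCriterion:
    """One acceptance criterion: what is demonstrated, measured, required, and why."""
    objective: str      # what you are trying to demonstrate
    parameter: str      # what you will measure, and when
    threshold: str      # the acceptable range or limit
    justification: str  # why the threshold is appropriate

fixation = AcceptanceCriterion(
    objective="Demonstrate adequate fixation in the target anatomy",
    parameter="Implant migration at 6 months post-implantation",
    threshold="Migration less than 2 mm in at least 95% of patients",
    justification="Clinical guideline X links migration above 2 mm to "
                  "increased revision risk; comparable devices report "
                  "93-97% of cases below this threshold.",
)
```

Whatever tool you use, the point is that all four fields exist for every criterion, and none of them is left blank.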

Key Insight
Justification is not optional. Every threshold you set must be defended with reference to clinical practice, published data, or regulatory precedent. If you cannot justify the threshold, do not use it.

How to Align Criteria With Risks

One of the most frequent gaps I see is the disconnect between the risk management file and the acceptance criteria. The manufacturer identifies risks, assigns residual risk levels, and then never translates those risks into measurable safety criteria.

Here is the logic that reviewers apply:

If a residual risk is classified as medium or high, there must be an acceptance criterion that addresses that risk. The criterion must define what level of harm is acceptable, how it will be detected, and what the threshold is for regulatory action.

For example, if nerve damage is a residual risk with medium severity, your acceptance criteria should state: “Incidence of transient nerve damage less than 5%, with no permanent nerve damage reported.” The threshold should be justified by reference to similar procedures or devices.

If the risk management file lists a risk but the acceptance criteria are silent, the reviewer sees an incomplete evaluation. This is a major finding.
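That coverage check is mechanical, which means you can run it yourself before the reviewer does it by hand. A minimal sketch, assuming you keep residual risks and criteria as simple lists with the hypothetical fields shown here:

```python
def uncovered_risks(residual_risks, criteria):
    """Return medium/high residual risks that no acceptance criterion addresses."""
    covered = {ref for c in criteria for ref in c["risk_refs"]}
    return [r for r in residual_risks
            if r["level"] in ("medium", "high") and r["hazard"] not in covered]

risks = [{"hazard": "nerve damage", "level": "medium"},
         {"hazard": "infection", "level": "high"}]
criteria = [{"id": "AC-03", "risk_refs": ["nerve damage"]}]

print(uncovered_risks(risks, criteria))
# [{'hazard': 'infection', 'level': 'high'}] -- exactly the gap a reviewer flags
```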

Common Mistakes That Trigger Findings

After reviewing hundreds of clinical evaluation reports, certain mistakes appear repeatedly. They are predictable. They are avoidable.

Using criteria from another device without adaptation. You take acceptance criteria from a predicate device and paste them into your report. But your device has different risks, different claims, or a different patient population. The criteria no longer fit. The reviewer sees the mismatch.

Setting criteria after collecting the data. You run a clinical investigation, analyze the results, and then define acceptance criteria that match what you observed. This is reverse engineering. The criteria must be defined in the clinical investigation plan, before any data is collected. If they are not, the evaluation is not credible.

Ignoring negative outcomes. Your acceptance criteria address success rates but say nothing about adverse events, device failures, or patient complaints. Safety is not optional. If your criteria do not cover safety endpoints, they are incomplete.

Failing to update criteria when risks change. Your risk management file is updated during post-market surveillance. New risks emerge. But your acceptance criteria remain unchanged. The clinical evaluation no longer reflects the current risk profile. This creates a gap that auditors will find.

Common Deficiency
Manufacturers define acceptance criteria once, at the time of the initial clinical evaluation, and never revisit them. As the risk profile evolves and new data emerges, the criteria become outdated. This is especially problematic during PMCF updates and periodic safety update reports.

How State of the Art Informs Your Criteria

Acceptance criteria cannot exist in a vacuum. They must reflect what is currently considered acceptable performance and safety in clinical practice. This is where the state of the art analysis becomes critical.

The state of the art tells you what comparable devices achieve, what clinical guidelines recommend, and what patients and clinicians expect. Your acceptance criteria must be consistent with this context.

If comparable devices achieve a 98% success rate, and you set your threshold at 85%, the reviewer will ask why your standard is lower. If clinical guidelines recommend a complication rate below 3%, and your threshold is 7%, you must justify the difference.

This does not mean your criteria must match the best-performing device on the market. It means your criteria must be defensible in the context of current practice. If your device addresses a different population, a different indication, or a different clinical setting, explain that. Show how your criteria reflect the specific context.
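As a rough illustration of that comparison, not a validated method, a few lines can flag thresholds that sit well below what comparable devices achieve. The tolerance value here is arbitrary and would itself need justification:

```python
def needs_justification(my_threshold, comparator_rates, tolerance=0.02):
    """Flag a success-rate threshold that sits notably below comparator performance.

    my_threshold, comparator_rates: success rates as fractions (e.g. 0.95).
    """
    floor = min(comparator_rates) - tolerance
    return my_threshold < floor

# Comparable devices report 93-97% success; an 85% threshold stands out.
print(needs_justification(0.85, [0.93, 0.95, 0.97]))  # True -> justify or revise
```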

Documentation That Supports Your Criteria

Acceptance criteria do not stand alone. They must be supported by a clear documentation trail that shows how they were derived and why they are appropriate.

This documentation includes:

Clinical investigation plan: The acceptance criteria should be defined here, before any data is collected. The plan should explain the rationale for each criterion and how it will be evaluated.

Risk management file: The criteria must address the residual risks identified in the file. Cross-references should be explicit.

State of the art analysis: The criteria must be justified by reference to clinical practice, published benchmarks, and comparable devices. Cite specific studies, guidelines, or regulatory precedents.

Clinical evaluation report: The CER must show how the clinical data meets or exceeds the acceptance criteria. Any deviations must be analyzed and justified.

If this documentation trail is incomplete, the acceptance criteria appear arbitrary. And arbitrary criteria are not acceptable under MDR Article 61 and Annex XIV.

What Happens When Criteria Are Not Met

This is the question manufacturers avoid. What do you do when your clinical data does not meet the acceptance criteria you defined?

The answer is not to change the criteria. That destroys the integrity of the evaluation.

The answer is to analyze the gap, understand why it occurred, and determine whether the device is still safe and effective despite the deviation. This analysis must be transparent and documented.

You may conclude that the criterion was too conservative, and the observed performance is still clinically acceptable. You must justify that conclusion with clinical reasoning and reference data.

You may conclude that the deviation represents a real safety or performance issue. In that case, you must implement corrective actions, update the risk management file, and potentially restrict the intended purpose or add warnings.

What you cannot do is ignore the deviation. Reviewers check whether your conclusions are consistent with your criteria. If they are not, the evaluation is unreliable.
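Part of that consistency check is statistical. Comparing a point estimate against a threshold ignores sampling uncertainty; a confidence bound makes the gap analysis concrete. Here is a sketch using the Wilson score interval, one common choice, with invented numbers:

```python
from math import sqrt

def wilson_lower_bound(successes, n, z=1.96):
    """Lower bound of the 95% Wilson score interval for a proportion."""
    if n == 0:
        return 0.0
    p = successes / n
    centre = p + z**2 / (2 * n)
    margin = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - margin) / (1 + z**2 / n)

# 181 of 190 patients met the endpoint: the observed rate (95.3%) clears a
# 95% threshold, but the lower confidence bound (about 91.2%) does not.
print(f"observed {181/190:.1%}, lower bound {wilson_lower_bound(181, 190):.1%}")
```

Whether the threshold applies to the point estimate or to a confidence bound is itself something the acceptance criterion should state, before the data arrives.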

Key Insight
Acceptance criteria create accountability. They force you to define success before you see the data. That is uncomfortable. It is also the only way to produce a credible clinical evaluation that a Notified Body can trust.

Practical Steps to Define Your Criteria

Here is how to approach this in your next clinical evaluation or clinical investigation plan:

Step one: List every claim you make about the device. For each claim, identify what must be measured to support it.

Step two: Review your risk management file. For each residual risk with medium or high severity, identify what safety outcome must be monitored and what threshold is acceptable.

Step three: Conduct your state of the art analysis. Identify benchmarks from clinical guidelines, published studies, and comparable devices. Use these benchmarks to set your thresholds.

Step four: Document the rationale for each criterion. Explain why the threshold is appropriate, what it is based on, and how it will be evaluated.

Step five: Cross-check your criteria against your intended purpose, your risk management file, and your state of the art analysis. If there are gaps, fill them before you collect data.
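Step five lends itself to a scripted check. A minimal sketch that extends the earlier risk-coverage idea to claims as well; all field names are hypothetical:

```python
def traceability_gaps(claims, residual_risks, criteria):
    """Return claims and medium/high risks that no acceptance criterion traces to."""
    covered = {ref for c in criteria for ref in c["traces_to"]}
    open_claims = [cl for cl in claims if cl not in covered]
    open_risks = [r["hazard"] for r in residual_risks
                  if r["level"] in ("medium", "high") and r["hazard"] not in covered]
    return open_claims, open_risks

claims = ["reduces recovery time", "achieves adequate fixation"]
risks = [{"hazard": "infection", "level": "high"}]
criteria = [{"id": "AC-01", "traces_to": ["achieves adequate fixation"]}]

print(traceability_gaps(claims, risks, criteria))
# (['reduces recovery time'], ['infection']) -- fill these before collecting data
```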

This process takes time. It requires input from clinical experts, regulatory specialists, and quality assurance. But it is the only way to create acceptance criteria that will survive regulatory scrutiny.

Acceptance criteria are not an administrative exercise. They are the foundation of your clinical evaluation. If they are weak, everything built on top of them is unstable. If they are strong, the rest of the evaluation follows naturally.

Most manufacturers realize this too late. They define vague criteria, collect data, and then struggle to demonstrate that the evidence is sufficient. The Notified Body sends findings. The approval is delayed. The costs multiply.

The alternative is to define acceptance criteria correctly from the beginning. To make them specific, measurable, justified, and aligned with your claims and risks. To treat them as the regulatory contract they are.

That is how you pass review on the first submission.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and otherwise at the interval justified for the device class: at least annually for high-risk devices such as class III and implantable devices, and typically every two to five years for lower-risk devices.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– MDR (EU) 2017/745, Article 61 and Annex XIV
– MDCG 2020-5: Clinical evaluation – Equivalence
– MDCG 2020-6: Sufficient clinical evidence for legacy devices
– MDCG 2020-7: Post-market clinical follow-up (PMCF) plan template
– MDCG 2020-8: Post-market clinical follow-up (PMCF) evaluation report template
– MDCG 2020-13: Clinical evaluation assessment report template