Why your acceptance criteria keep getting rejected by reviewers
I reviewed a clinical evaluation last month where every endpoint had acceptance criteria. Clear thresholds. Statistical justification. The manufacturer had spent months on this. The Notified Body rejected it in the first round. Not because the data was weak. Because the criteria themselves could not survive scrutiny.
This keeps happening. Teams invest enormous effort into defining acceptance criteria for clinical endpoints. They set specific thresholds. They document statistical rationale. They align with clinical guidelines. Then the reviewer asks one question that collapses the entire structure.
The question is simple: “Why is this threshold clinically acceptable?”
And suddenly the manufacturer realizes they built their entire case on a number they cannot defend.
The regulatory expectation behind acceptance criteria
The MDR does not require acceptance criteria in the formal sense. But it requires demonstration of acceptable benefit-risk. Article 61(1) states that clinical evaluation must confirm the device meets its safety and performance requirements under normal conditions of use.
This means you must show your device performs adequately. Not just that it performs. That the level of performance is clinically sufficient.
MDCG 2020-6 reinforces this. Clinical data must demonstrate that residual risks are acceptable when weighed against the benefits. This requires boundaries. Clinical investigators need to know what outcomes justify continued use. Regulatory reviewers need to know what you consider acceptable.
So manufacturers define acceptance criteria. For complications. For technical success. For functional improvement. For adverse events.
The problem starts when these criteria are set without defensible clinical reasoning.
Acceptance criteria defined by statistical convenience or internal targets rather than clinical evidence of what constitutes acceptable device performance in the intended population.
What makes a criterion defensible
A defensible acceptance criterion has three components. First, it must be anchored in clinical evidence. Second, it must reflect the risk profile of the target population. Third, it must be consistent with the state of the art.
Let me show you what this means in practice.
A manufacturer develops a new vascular closure device. They set an acceptance criterion for major complications at less than 3%. When asked why 3%, they reference their statistical power calculation. That is not clinical justification. That is sample size planning.
The clinical justification would reference published complication rates for the standard of care in the same patient population. If manual compression shows major complications around 1.5% in large registries, your device must be non-inferior to that benchmark or demonstrate offsetting benefits that make slightly higher risk acceptable.
If surgical repair shows complications around 5%, but your device is meant for less invasive use in higher-risk patients, the comparison changes. Now your criterion might justifiably be higher than the general compression rate but still represent clinical improvement for your specific indication.
The criterion must fit the clinical context. Not your desired outcome.
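To make the benchmark comparison concrete, here is a minimal sketch in Python. Everything in it is illustrative: the function name `meets_noninferiority`, the 220-patient cohort, the 1.5% benchmark, and the 1.5% margin are hypothetical assumptions, not values from any real submission, and a real analysis would use an exact or score interval rather than the Wald approximation shown here.

```python
from math import sqrt

def meets_noninferiority(events, n, benchmark, margin, z=1.96):
    """Check whether the upper bound of a 95% Wald confidence interval
    for the observed complication rate stays below benchmark + margin.
    Sketch only: real submissions should use exact/score intervals."""
    p = events / n
    upper = p + z * sqrt(p * (1 - p) / n)
    return upper < benchmark + margin

# Hypothetical study: 4 major complications in 220 patients, against a
# 1.5% manual-compression benchmark and a 1.5% clinically justified margin.
print(meets_noninferiority(4, 220, 0.015, 0.015))  # → False
```

The instructive part is why it fails: the point estimate (1.8%) sits below benchmark plus margin (3.0%), but the confidence interval's upper bound does not. That gap between the point estimate and what the data can actually demonstrate is exactly the distinction a reviewer will probe.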
The state of the art constraint
Here is where many criteria fail under scrutiny. The state of the art is not static. MDCG 2020-6 makes this clear. Your clinical evaluation must consider current knowledge and technology.
I see acceptance criteria that would have been reasonable five years ago but no longer reflect what is achievable with current devices. Reviewers know this. They track literature. They review multiple submissions in the same field.
When your threshold is worse than what competing devices demonstrate in published studies, you have a problem. Not a competitive problem. A regulatory problem. You are essentially asking to market a device with performance below what patients can already access.
Unless you have a specific rationale—different patient population, additional benefits, reduced invasiveness, cost considerations in specific healthcare settings—the criterion will not survive review.
Your acceptance criteria must be tighter than or equal to state of the art performance unless you can clinically justify why a worse outcome is acceptable given compensating benefits.
The hidden risk of too-tight criteria
Some manufacturers go the other direction. They set extremely tight acceptance criteria to signal confidence. Major complications less than 0.5%. Technical success greater than 99%. Adverse event rates below historical controls.
This creates a different trap.
If your study fails to meet these criteria, you now have documented evidence that your device does not perform acceptably according to your own definition. Your CER must address this failure. You cannot simply ignore it or redefine success post-hoc.
I have seen manufacturers try. They set aggressive criteria in protocols. The data comes in slightly above the threshold. Suddenly the CER argues the criterion was too conservative. Or that the result is still clinically meaningful. Or that subgroup analysis shows acceptable performance in certain patients.
Reviewers see through this immediately. It reads as outcome manipulation. It destroys credibility across the entire submission.
The solution is not to set loose criteria. It is to set criteria you can defend before you see the data. Criteria grounded in clinical evidence and realistic performance expectations.
The sample size tension
Here is the practical tension. Tight acceptance criteria require large sample sizes to demonstrate with adequate statistical power. Most manufacturers have limited resources for clinical investigation.
So they face a choice. Set the criterion where it should be clinically and run an underpowered study. Or adjust the criterion to make the study feasible and risk regulatory challenge.
Neither option is good. But the second option is worse.
An underpowered study that meets a clinically sound criterion can be supplemented with post-market data. PMCF can provide additional evidence. Real-world data can confirm performance. The criterion remains valid.
A study that meets an artificially relaxed criterion has proven nothing. You have evidence your device meets a standard you cannot defend. That evidence is not useful for regulatory approval. It is not useful for clinical decision-making. It is documentation without value.
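The tension is easy to quantify. The sketch below uses the standard normal-approximation formula for a one-sided, one-sample proportion test; the 1.5% expected true rate and the hard-coded z-values are illustrative assumptions, and a real protocol should use a validated sample-size tool.

```python
from math import sqrt, ceil

def sample_size_one_prop(p0, p1):
    """Approximate n needed to show the true complication rate p1 is
    below an acceptance threshold p0 (one-sided alpha=0.025, power=0.80,
    normal approximation). A sketch, not a validated calculator."""
    z_a, z_b = 1.96, 0.84  # z for one-sided alpha=0.025 and 80% power
    num = z_a * sqrt(p0 * (1 - p0)) + z_b * sqrt(p1 * (1 - p1))
    return ceil((num / (p0 - p1)) ** 2)

# Expected true rate 1.5%; watch n grow as the threshold tightens.
for p0 in (0.05, 0.04, 0.03, 0.02):
    print(f"threshold {p0:.0%}: n = {sample_size_one_prop(p0, 0.015)}")
```

Tightening the threshold from 5% to 2% against the same expected 1.5% rate multiplies the required cohort roughly twenty-five-fold. That is the feasibility pressure that tempts teams to relax the criterion instead of funding the study the criterion actually requires.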
Acceptance criteria adjusted during protocol development to match available sample size rather than clinical necessity, then reverse-justified with selective literature citations.
How reviewers evaluate your criteria
Understanding the reviewer perspective changes how you set criteria. Reviewers do not start by checking if your data meets your thresholds. They start by checking if your thresholds make clinical sense.
They look at your literature review. Do the cited studies support this criterion? Is the literature search comprehensive and current? Are you selectively citing favorable comparators?
They look at guidelines. Professional society recommendations. Clinical consensus statements. If your criterion contradicts established guidance without clear justification, they will challenge it.
They look at their own experience. Notified Bodies review multiple devices in the same category. They know what performance levels are achievable. They know what other manufacturers demonstrate. If your criterion is an outlier, they will question it.
And they look at consistency. Do your acceptance criteria align with your risk management? If you classify a complication as serious in your risk analysis but set a 5% acceptance rate in your study, the inconsistency signals confused thinking about benefit-risk balance.
The pre-submission advantage
This is where pre-submission meetings have real value. Not for negotiating easier criteria. For testing whether your rationale holds up under scrutiny.
Present your proposed acceptance criteria with your clinical justification before you finalize the protocol. See if the reviewer challenges them. If they do, that feedback is worth months of protocol revision and data collection.
I know manufacturers who skip this step to save time. They always lose more time later. Either in study amendments when they realize the criteria are unworkable. Or in review cycles when the criteria are rejected.
The criteria are the foundation. If the foundation is weak, nothing built on it will stand.
What happens when criteria cannot be defended
Let me be specific about the consequences. When your acceptance criteria are challenged and you cannot provide adequate clinical justification, several things happen.
First, the entire clinical evaluation becomes questionable. If the thresholds are arbitrary, the conclusions based on meeting those thresholds are meaningless. Your demonstration of acceptable benefit-risk collapses.
Second, you face a major non-conformity. Not a request for clarification. Not a minor gap. A fundamental deficiency in clinical evaluation methodology.
Third, you have limited options to resolve it. You cannot easily change acceptance criteria post-hoc without appearing to manipulate outcomes. You cannot claim different criteria were really intended. The documented criteria are what you are evaluated against.
Your options become: provide new evidence that the criteria are clinically sound, conduct additional studies with proper criteria, or withdraw and redesign your clinical investigation strategy.
All of these are expensive. All take time. All could have been avoided.
Acceptance criteria deficiencies cannot be fixed with better writing. They require either additional evidence or acknowledgment that your clinical data does not adequately demonstrate acceptable performance.
Building criteria that survive scrutiny
Start with the clinical question. What outcomes matter to patients and clinicians? Not what you can measure easily. What actually determines if this device should be used.
Then look at comparative evidence. What do current alternatives achieve? This must be systematic. Not cherry-picked studies. Not outdated references. A proper state of the art review.
Consider your risk profile. What risks does your device carry? What benefits does it provide? Where is the balance point? This should align directly with your risk management file.
Document the reasoning. Not after you set the criteria. While you set them. Create a clear record of the clinical evidence and logic that led to each threshold.
Test the criteria against clinical expertise. Not just your internal team. Independent clinical advisors who can challenge assumptions. They will ask the same questions reviewers will ask. Better to answer them now.
Finally, make sure the criteria are measurable and verifiable. Vague criteria like “clinically acceptable complication rate” are not criteria at all. Specific thresholds with clear definitions are.
The PMCF connection
Your acceptance criteria should connect directly to your PMCF plan. If a criterion matters enough to set it in your pivotal study, it matters enough to monitor post-market.
This connection tests whether the criteria are meaningful. If you are not willing to track an endpoint in PMCF, why was it an acceptance criterion? If you cannot monitor it post-market, how will you verify continued acceptable performance?
MDCG 2020-7 on PMCF makes clear that post-market data must confirm the conclusions of pre-market evaluation. Your acceptance criteria are those conclusions quantified. They must be traceable into your post-market surveillance.
Reviewers check this. Criteria that appear in the clinical evaluation but disappear from PMCF signal that the criteria were procedural rather than substantive. That they were set to pass review rather than to ensure clinical adequacy.
The long-term view
Acceptance criteria are not just regulatory requirements. They are clinical commitments. You are stating publicly what performance level you consider acceptable for your device.
If your device is approved and later shows performance worse than your stated criteria, you have a post-market problem. Not a statistical problem. A clinical and regulatory problem about whether the device should remain on the market.
This is why the criteria must be defensible not just at submission but throughout the device lifecycle. They become the benchmark for PMCF evaluation. For incident investigation. For benefit-risk reassessment. For comparison with emerging alternatives.
I have watched manufacturers struggle with this years after approval. Their initial acceptance criteria were too tight or too loose or not clinically grounded. Now they have market data that challenges those criteria. They cannot easily revise them without triggering questions about why the device was approved in the first place.
The time to get this right is before you lock the protocol. Before you collect data. Before you make claims that become part of your regulatory commitment.
Once the data exists and the criteria are documented, your flexibility disappears. You live with what you defined.
So define carefully. Define based on clinical evidence. Define in a way you can defend not just today but five years from now when reviewers look at your PMCF data and ask if the device still meets its original acceptance criteria.
That question will come. Make sure you can answer it.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
– Regulation (EU) 2017/745 (MDR), Article 61
– MDCG 2020-5: Clinical Evaluation - Equivalence
– MDCG 2020-6: Sufficient clinical evidence for legacy devices
– MDCG 2020-7: Post-market clinical follow-up (PMCF) plan template