Why Most PMCF Study Designs Fail Before They Start
I reviewed a PMCF study protocol last month that had been rejected three times. The manufacturer had a registry, a questionnaire, and a literature review plan. On paper, it looked complete. But the study design answered none of the clinical questions the device actually needed to address. This is not rare.
The problem starts with a fundamental misunderstanding of what PMCF is for. Many teams treat it as a compliance checkbox. They design a study because MDR Article 61 and Annex XIV Part B require one. They submit something that looks like a study. But when a Notified Body or competent authority reviews it, the disconnect becomes obvious.
PMCF is not about collecting data. It is about answering specific clinical questions that remain open after the initial clinical evaluation. If your study design does not map directly to those questions, it will not pass review. And if it does pass initially, it will fail during the assessment of results.
The Purpose Behind PMCF Study Design
MDR Article 61 defines PMCF as a continuous process. It requires manufacturers to proactively collect and evaluate clinical data to confirm safety and performance, identify emerging risks, and update the benefit-risk determination throughout the device lifecycle.
MDCG 2020-7 clarifies this further. PMCF is not surveillance alone. It is an extension of the clinical evaluation. It addresses residual uncertainties from the pre-market phase. It validates assumptions made during the initial assessment. And it detects changes in clinical practice or patient populations that might alter the risk profile.
Here is what this means for study design. Your PMCF study must start from the clinical evaluation report. You identify gaps. You define questions. You choose methods that can actually answer those questions. The design flows from the clinical need, not from what is easiest to implement.
A PMCF study that is not directly traceable to specific gaps or uncertainties in your CER will be flagged during review. Reviewers will ask what clinical question each data point is meant to answer. If you cannot answer that, the study design is invalid.
Types of PMCF Studies Under MDR
MDR does not mandate specific study types. But it does require that the methods chosen are appropriate for the clinical questions. In practice, PMCF studies fall into two broad categories: those that generate new clinical data and those that systematically evaluate existing data.
Prospective Data Collection
This includes registries, surveys, follow-up studies, and observational cohorts. These methods are used when the clinical question cannot be answered with existing data. For example, if your device has a new indication, a new patient population, or a modification that affects performance, you need fresh data.
Registries are the most common choice. They allow long-term tracking of real-world outcomes. But many registry protocols I see are too broad. They collect everything and answer nothing. A good registry protocol defines specific endpoints tied to the residual risks identified in your clinical evaluation.
Surveys and questionnaires are often misused. They are appropriate for patient-reported outcomes or usability validation. But if your clinical question is about long-term complication rates or device durability, a survey will not provide the evidence a reviewer expects.
Systematic Literature Review
This is the most misunderstood PMCF method. A systematic literature review is not the same as the literature review you did for the initial clinical evaluation. It is a structured, ongoing process to monitor new publications about your device, equivalent devices, or the clinical condition.
MDCG 2020-6 explains the methodology. You define search terms, databases, inclusion criteria, and review intervals. You document each cycle. And you analyze whether new data changes your benefit-risk assessment.
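To make the cycle concrete, here is a minimal sketch of a literature review protocol held as a structured record, with a check for when the next search cycle is due. All field names, search terms, and the 182-day interval are illustrative assumptions, not terms taken from MDCG 2020-6:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class LiteratureReviewProtocol:
    """Illustrative record of a systematic literature review protocol.

    Field names are assumptions for this sketch, not regulatory terms.
    """
    search_terms: list[str]
    databases: list[str]
    inclusion_criteria: list[str]
    review_interval_days: int
    last_cycle: date  # date the previous search cycle was completed

    def next_cycle_due(self) -> date:
        # Each documented cycle schedules the next one.
        return self.last_cycle + timedelta(days=self.review_interval_days)

    def cycle_overdue(self, today: date) -> bool:
        return today > self.next_cycle_due()

protocol = LiteratureReviewProtocol(
    search_terms=["device X", "equivalent device Y", "condition Z outcomes"],
    databases=["PubMed", "Embase"],
    inclusion_criteria=["human studies", "published since last cycle"],
    review_interval_days=182,  # roughly twice yearly, tighter than an annual search
    last_cycle=date(2024, 1, 15),
)
print(protocol.next_cycle_due())
print(protocol.cycle_overdue(date(2024, 9, 1)))
```

The point of the structure is that every cycle is documented and the next one is scheduled, which is what makes the activity proactive rather than passive.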
But here is where it goes wrong. Many manufacturers treat this as a passive activity. They search once per year, find nothing new, and declare PMCF complete. That is not enough. The review must be designed to detect signals. If your search never finds anything, your search strategy is probably too narrow.
Notified Bodies frequently reject PMCF plans that rely solely on literature review when the device has low clinical evidence or when equivalent devices are not well-documented. If your pre-market clinical evaluation already struggled with literature gaps, your PMCF cannot rely on the same approach.
Choosing the Right Method for Your Device
The method you choose depends on three factors: the clinical question, the availability of existing data, and the feasibility of new data collection. There is no standard formula. But there are patterns that work and patterns that fail review.
If your device has a well-established predicate with extensive post-market data, a systematic literature review may be sufficient. But only if that data actually covers the clinical questions relevant to your device. If your device differs in design, material, or intended use, equivalence breaks down and you need device-specific data.
If your device is novel, has limited pre-market data, or is used in a vulnerable population, prospective data collection is almost always required. Reviewers will not accept a plan that defers evidence generation indefinitely.
If your device has a known residual risk, your PMCF must be designed to monitor that risk specifically. For example, if your clinical evaluation identified a potential long-term complication but lacked data to quantify it, your PMCF study must track that complication in real-world use. A generic registry that captures general outcomes will not satisfy this requirement.
And here is the part many teams miss. You can combine methods. A registry for new data collection and a systematic review for broader context. A survey for usability and a cohort study for clinical outcomes. The key is that each method addresses a specific question and the combination covers all your gaps.
MDR Requirements for Study Design
Annex XIV Part B lays out what must be in a PMCF plan. But the regulation is high-level. The real requirements come from how Notified Bodies and competent authorities interpret those provisions during review.
Clearly Defined Objectives
Your study objectives must be specific and measurable. “To confirm safety and performance” is not an objective. “To assess the rate of device-related complications at 12 months in patients with indication X” is an objective.
Each objective must link back to a gap or uncertainty in your CER. Reviewers will cross-check. If an objective appears in your PMCF plan but is not justified in your clinical evaluation, it raises questions. And if a gap in your CER has no corresponding PMCF objective, that is a deficiency.
Appropriate Methodology
Your methodology must be rigorous enough to produce valid conclusions. For prospective studies, this means defining sample size, inclusion criteria, endpoints, and analysis plans. You do not need the rigor of a clinical trial. But you need enough structure to ensure the data is interpretable.
For systematic reviews, you need a protocol that defines search strategy, selection criteria, and appraisal methods. MDCG 2020-6 provides the framework. If your plan does not follow that framework, expect pushback.
Feasibility and Timeline
A study that cannot realistically be executed is not a study. If your plan assumes enrollment of 500 patients per year but your annual sales are 50 devices, the numbers do not work. Reviewers notice this.
Timelines must be realistic. If your study requires long-term follow-up, your plan must explain how you will collect data over time. If you claim 5-year outcomes but have no mechanism to contact patients after the first year, the design is flawed.
Feasibility is not just about logistics. It is about whether the study can produce meaningful data within a timeframe that allows you to act on the findings. If your study will take 10 years to complete but your device has a 5-year design lifecycle, you have a problem.
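The enrollment arithmetic above is worth running explicitly before a plan is submitted. A minimal sketch, using illustrative numbers (the 40% consent rate and 200-patient target are assumptions for the example, not figures from any guidance):

```python
# Enrollment feasibility sanity check with illustrative numbers.
annual_sales = 50          # devices sold per year
enrollment_rate = 0.40     # assumed fraction of treated patients who consent
target_sample = 200        # sample size the study design calls for

patients_per_year = annual_sales * enrollment_rate
years_to_target = target_sample / patients_per_year

print(f"{patients_per_year:.0f} patients/year "
      f"-> {years_to_target:.1f} years to reach {target_sample}")
```

Here the plan would need a decade to reach its target sample. Against a 5-year design lifecycle, that is exactly the mismatch reviewers notice, and it is cheaper to find it with three lines of arithmetic than with a rejection letter.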
Common Mistakes in PMCF Study Design
I see the same design errors repeatedly across different manufacturers and device types. These mistakes are predictable. And they are preventable.
Designing the Study After the Plan is Written
Many teams write a PMCF plan to satisfy the technical documentation requirement, then try to figure out how to execute it later. This backward approach creates plans that are not implementable. Design the study first. Document it second.
Generic Protocols
Cookie-cutter PMCF protocols fail because they do not address device-specific questions. A protocol template might save time, but it will not survive review if it does not reflect the actual clinical profile of your device.
No Link to Risk Management
Your PMCF plan must connect to your risk management file. If your risk analysis identifies a residual risk, your PMCF must monitor that risk. If those connections are not explicit, reviewers will ask why.
Ignoring Interim Analysis
PMCF is continuous. You cannot design a study, collect data for five years, and analyze it once at the end. Your plan must define interim review points. You must explain how findings will be evaluated and acted upon. If a safety signal emerges at year two, waiting until year five to analyze it is not acceptable.
Plans that do not define interim analysis or that set review intervals longer than PMS reporting cycles are flagged. Reviewers expect PMCF data to feed into periodic safety update reports and updated clinical evaluations on a defined schedule.
What Happens When the Design is Wrong
A deficient PMCF study design has consequences beyond the initial rejection. If your plan is approved but the design is flawed, you will collect data that cannot answer your clinical questions. When you submit your PMCF evaluation report, the Notified Body will identify the gap. And you will need to start over.
This happens more than it should. A manufacturer implements a registry for three years. They submit results. The reviewer asks how the data addresses a specific residual risk. The manufacturer realizes the registry did not capture the right endpoints. All that time and effort produced unusable evidence.
Worse, if your PMCF fails to detect a safety issue that later emerges through vigilance, you face scrutiny over why your post-market clinical follow-up was inadequate. The study design becomes part of the investigation.
This is why the design phase is critical. You cannot fix a flawed study after data collection begins. Get the design right before you start.
Practical Steps for Designing a Valid PMCF Study
Start with your clinical evaluation report. Identify every statement that includes uncertainty, limited data, or assumptions. These are your PMCF triggers. For each one, ask whether new data is needed or whether systematic monitoring of existing data is sufficient.
Define specific clinical questions for each trigger. Write them as questions that have measurable answers. “Is the device safe?” is not a question. “What is the incidence of infection within 30 days of implantation?” is a question.
Choose methods that can answer those questions. If you need device-specific outcomes, you need prospective data collection. If you need to monitor the evolving state of the art, you need systematic literature review. Match the method to the question.
Define your endpoints, sample size, and timeline. Be realistic. If you cannot enroll enough patients, adjust your approach. If the timeline is too long, consider interim milestones.
Document the rationale for every design choice. Explain why this method answers this question. Connect each element back to your CER. Make the logic explicit.
And involve your clinical evaluation team in the design. PMCF is not a regulatory task separate from clinical evaluation. It is the continuation of it. The same people who wrote your CER should guide your PMCF study design.
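The steps above reduce to a traceability exercise: every CER gap needs at least one PMCF objective, and every objective needs a CER gap justifying it. A minimal sketch of that cross-check (gap IDs, objective fields, and method names are all hypothetical):

```python
# Hypothetical traceability check between CER gaps and PMCF objectives.
# Reviewers perform this cross-check manually; the logic is the same.
cer_gaps = {
    "GAP-01": "long-term complication rate unquantified",
    "GAP-02": "limited data in the pediatric population",
}

pmcf_objectives = [
    {"id": "OBJ-01", "gap": "GAP-01",
     "question": "What is the device-related complication rate at 12 months?",
     "method": "registry"},
]

covered = {obj["gap"] for obj in pmcf_objectives}
# A CER gap with no objective is a deficiency.
uncovered_gaps = sorted(set(cer_gaps) - covered)
# An objective with no CER gap behind it raises questions.
orphan_objectives = [o["id"] for o in pmcf_objectives if o["gap"] not in cer_gaps]

print("Uncovered gaps:", uncovered_gaps)
print("Orphan objectives:", orphan_objectives)
```

Run against this example, the pediatric gap has no objective, which is precisely the kind of finding that should surface during design, not during conformity assessment.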
When manufacturers approach PMCF study design as an evidence generation strategy rather than a compliance obligation, the quality improves dramatically. The studies produce useful data. The reports are easier to write. And the reviews go more smoothly.
The mindset shift is simple but not easy. It requires thinking about what you actually need to know, not just what you need to document. But once that shift happens, PMCF becomes a tool, not a burden.
Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).
– MDR 2017/745 Article 61 and Annex XIV Part B
– MDCG 2020-7 Post-Market Clinical Follow-up (PMCF) Plan Template
– MDCG 2020-6 Regulation (EU) 2017/745: Clinical evidence needed for medical devices previously CE marked under Directives 93/42/EEC or 90/385/EEC