Your PMCF study design might be solving the wrong problem


Written by HATEM RABEH, MD, MSc Ing

Your Clinical Evaluation Expert and Partner


I see manufacturers spend months designing elaborate PMCF studies only to realize during the audit that the data collected does not answer the clinical questions that matter. The study ran smoothly. The data is clean. But it misses the point entirely. The problem is not execution. The problem is study selection.

The issue surfaces during the first exchange with the Notified Body. The reviewer reads the PMCF plan and asks: “Why did you choose a registry for this question?” or “A survey does not provide the clinical evidence you need here.” The manufacturer defends the choice based on feasibility. The reviewer is not questioning feasibility. They are questioning appropriateness.

This disconnect happens because many manufacturers approach PMCF study design backwards. They start with what is easy to implement rather than what the clinical evaluation actually requires. The result is a study that runs to completion but contributes nothing to the device’s evidence profile.

Understanding how to match study type to clinical question is not an academic exercise. It determines whether your PMCF plan will pass review, whether your data will be usable, and whether you will face major corrective actions later when gaps become obvious.

The Three Core Study Types in PMCF

Regulation (EU) 2017/745 Annex XIV Part B defines the requirements for post-market clinical follow-up; MDCG 2020-7 and MDCG 2020-8 provide the corresponding plan and evaluation report templates. Within that framework, manufacturers typically deploy three study architectures: surveys, registries, and focused clinical studies. Each serves a different purpose. Each has different strengths and limitations.

The confusion starts when manufacturers treat these as interchangeable options. They are not. The structure of the study must align with the nature of the clinical question. Mismatch here creates data that looks complete but lacks evidentiary value.

Surveys: Perception and Reported Outcomes

Surveys are the most misused PMCF tool. I see them deployed as a default option because they are quick to set up and inexpensive to run. The manufacturer sends questionnaires to users or patients, collects responses, and considers the PMCF obligation fulfilled.

The problem is that surveys capture perception, not clinical reality. They are useful when the clinical question is about usability, satisfaction, or subjective experience. They become problematic when manufacturers try to use survey data to demonstrate safety or performance outcomes that require objective measurement.

A survey asking patients if they experienced pain after using a device is not the same as clinical documentation of adverse events. A survey asking clinicians if they found the device easy to use is not the same as validated human factors data. Notified Bodies recognize this distinction immediately.

Common Deficiency
Manufacturers use surveys to collect data on clinical performance or safety outcomes, then present response rates and satisfaction scores as evidence. Reviewers reject this because subjective perception does not substitute for objective clinical measurement.

Surveys work well for specific purposes. If your clinical evaluation identifies a gap in understanding real-world usability, a structured survey with validated questions can provide meaningful data. If you need to confirm that users understand instructions for use, surveys can demonstrate effectiveness of training materials. But if your gap is about safety outcomes, complication rates, or device performance in specific patient populations, surveys will not close it.

The selection criterion is simple: Does the clinical question require subjective feedback or objective measurement? If the answer is objective measurement, do not use a survey.

Registries: Long-Term Patterns and Comparative Context

Registries track defined patient populations over extended periods. They collect standardized data points across multiple sites or multiple devices. The strength of a registry is breadth and duration. The weakness is depth and specificity.

I see manufacturers choose registries when they need to demonstrate long-term safety or to compare their device performance against existing standards of care. This can be appropriate. But registries require significant infrastructure. They require consistent data collection protocols across sites. They require ongoing monitoring and quality control.

Many manufacturers underestimate this complexity. They set up a registry, collect data for a year, and then realize the data is too heterogeneous to analyze meaningfully. Different sites used different definitions. Different clinicians recorded different parameters. The registry produced a large dataset that cannot answer the clinical question because the data lacks consistency.

Registries work when the clinical question is about trends, patterns, or comparative performance over time. They work when you need to demonstrate that your device performs consistently across diverse clinical settings. They work when the outcome of interest is well-defined and universally measured.

Key Insight
A registry is not a passive data collection tool. It requires active governance, standardized protocols, and continuous quality assurance. If you cannot commit to this structure, a registry will not deliver usable evidence.

The selection criterion for registries: Does the clinical question require longitudinal data across a broad population? Can the outcome be measured consistently across sites? If the answer to either question is no, a registry is not the right tool.

Focused Clinical Studies: Specific Hypotheses and Controlled Conditions

Focused clinical studies test specific hypotheses under controlled conditions. They are designed to answer targeted questions with precision. They require clear protocols, defined endpoints, and rigorous follow-up.

These studies are more resource-intensive than surveys or registries. They require clinical sites, ethics approval, informed consent, and structured monitoring. Because of this, manufacturers often avoid them. But when the clinical evaluation identifies a specific gap that cannot be addressed through other means, a focused study is the only valid option.

I see this most often with equivalence claims. A manufacturer claims equivalence to a predicate device but cannot demonstrate that equivalence holds in their specific population. The gap is precise. The clinical question is clear: Does this device perform as well as the predicate in this indication? A survey will not answer that. A registry will not answer that. Only a controlled study with defined endpoints and comparative data will close the gap.

Focused studies also become necessary when there is a safety signal that requires investigation. If post-market surveillance identifies an unexpected adverse event pattern, a reactive PMCF study may be required to understand causality and frequency. This study must be designed to isolate variables and establish relationships. That requires control and specificity.

The selection criterion for focused studies: Is the clinical question specific and testable? Does it require controlled conditions to answer? If yes, a focused study is likely necessary despite the resource commitment.

How Clinical Questions Drive Study Selection

The mistake I see repeatedly is starting with study type and then retrofitting clinical questions to match. The process must run in the opposite direction. The clinical evaluation identifies gaps. Those gaps generate clinical questions. The nature of those questions determines the study type.

Consider a Class III implantable device with an equivalence claim to a predicate. The clinical evaluation identifies that while technical equivalence is demonstrated, there is limited data on long-term performance in the specific patient population the manufacturer targets. What study type addresses this?

A survey would capture patient-reported outcomes but would not provide objective performance data. A registry could work if the goal is to track long-term safety and performance trends across a broad population with standardized outcome measures. A focused study could work if the goal is to compare performance against the predicate in a controlled setting with defined endpoints.

The decision depends on what the clinical evaluation actually requires. If the equivalence claim needs validation, a focused comparative study is the answer. If the question is about long-term safety in diverse settings, a registry makes sense. If the question is about patient satisfaction or quality of life, a survey could contribute. But these are not interchangeable options.

Common Deficiency
Manufacturers design PMCF studies before finalizing the clinical evaluation. They choose a study type based on budget and timeline, then try to justify it retrospectively. Reviewers identify this immediately because the study does not align with the clinical gaps documented in the CER.

The process must be linear: Clinical evaluation identifies gaps. Gaps become clinical questions. Clinical questions determine study type. This sequence is not flexible. Reversing it creates evidence that appears complete but lacks regulatory validity.
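The linear sequence above can be sketched as a small decision helper. This is an illustrative aid only, not part of any MDCG template; the enum values and function names are hypothetical labels for the three data types the article distinguishes.

```python
from enum import Enum

class DataNeed(Enum):
    """Nature of the evidence a clinical question requires (hypothetical taxonomy)."""
    SUBJECTIVE_FEEDBACK = "subjective feedback"      # usability, satisfaction, IFU comprehension
    LONGITUDINAL_TRENDS = "longitudinal trends"      # long-term safety/performance patterns
    CONTROLLED_COMPARISON = "controlled comparison"  # equivalence validation, safety-signal causality

def select_study_type(need: DataNeed, consistent_outcome_measure: bool = True) -> str:
    """Map the data a clinical question requires to a PMCF study architecture.

    Mirrors the sequence: gap -> clinical question -> study type.
    """
    if need is DataNeed.SUBJECTIVE_FEEDBACK:
        return "survey"
    if need is DataNeed.LONGITUDINAL_TRENDS:
        # A registry only delivers usable evidence if the outcome
        # can be measured consistently across sites.
        if consistent_outcome_measure:
            return "registry"
        return "reconsider: outcome not consistently measurable across sites"
    return "focused clinical study"
```

Note that the function takes the clinical question's data need as input and never the other way around: there is deliberately no path from a preferred study type back to a question, which is the retrofitting error the reviewers catch.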

Practical Considerations That Complicate Selection

In theory, matching study type to clinical question is straightforward. In practice, real-world constraints complicate the decision. Budget limitations, site availability, patient recruitment challenges, and time pressures all influence study design.

These constraints are real. But they cannot override the fundamental requirement that the study must answer the clinical question. A registry that is affordable but does not collect the right data is not a solution. A survey that is fast to deploy but provides subjective data when objective measurement is needed does not fulfill the PMCF obligation.

I have seen manufacturers choose a less appropriate study type because the ideal study is not feasible, then face significant delays when the Notified Body rejects the plan. The time saved upfront is lost later. The budget saved on study design is spent on corrective actions and resubmissions.

The pragmatic approach is to design the study that answers the clinical question, then find ways to make it feasible. If a focused study is required but resources are limited, consider phased enrollment or collaboration with academic centers. If a registry is needed but infrastructure is lacking, explore participation in existing registries rather than building from scratch.

Feasibility is a legitimate constraint. But it must be addressed within the framework of what is clinically required, not used as a reason to choose an inadequate study design.

What This Means for Your PMCF Strategy

The selection of study type is not a secondary decision. It is a strategic choice that determines whether your PMCF activities will generate usable evidence or create compliance without value.

Before you design any PMCF study, ask three questions:

First: What is the specific clinical question this study must answer? If you cannot articulate the question clearly, the study will not answer it.

Second: What type of data is required to answer that question? Subjective feedback, longitudinal trends, or controlled comparison? The answer determines the study architecture.

Third: Can this study type realistically deliver that data in your operational context? If not, what modifications are needed to make it feasible without compromising evidentiary value?
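One way to make the three questions operational is to treat them as a gating record that must be complete before design work starts. The structure below is a hypothetical sketch, not a regulatory artifact; the field names are assumptions chosen to mirror the three questions.

```python
from dataclasses import dataclass

@dataclass
class PMCFStudyRationale:
    """Hypothetical pre-design record answering the three questions."""
    clinical_question: str   # Q1: the specific question the study must answer
    data_required: str       # Q2: subjective feedback, longitudinal trends, or controlled comparison
    feasibility_notes: str   # Q3: how the study will realistically deliver that data

def ready_for_design(rationale: PMCFStudyRationale) -> bool:
    """Study design may begin only once all three answers are documented."""
    return all(
        answer.strip()
        for answer in (
            rationale.clinical_question,
            rationale.data_required,
            rationale.feasibility_notes,
        )
    )
```

A gate this simple forces the rationale to exist in writing before budget and timeline discussions begin, which is exactly the documentation trail a Notified Body reviewer looks for in the PMCF plan.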

These questions must be answered before study design begins. They must be documented in the PMCF plan with clear rationale linking clinical questions to study selection. This rationale is what Notified Bodies evaluate. They are not assessing whether your study will run smoothly. They are assessing whether it will generate evidence that addresses the clinical gaps identified in your evaluation.

Key Insight
The quality of your PMCF data is determined at the study selection stage. A well-executed study of the wrong type produces clean data that has no evidentiary value. Fix the selection logic before you invest in execution.

The conversation around PMCF study types often focuses on logistics and implementation. But the critical decision happens earlier. It happens when you match the study architecture to the clinical question. Get that match right, and the rest becomes manageable. Get it wrong, and no amount of careful execution will make the data useful.

In the next part of this series, we will examine how to structure PMCF protocols that survive Notified Body scrutiny, focusing on endpoint selection and data quality requirements that determine whether your study results will be accepted as clinical evidence.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence), MDCG 2020-13 (CEAR Template), MDCG 2020-7 (PMCF Plan), and MDCG 2020-8 (PMCF Evaluation Report).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 (MDR) Annex XIV Part B
– MDCG 2020-7: Post-Market Clinical Follow-up (PMCF) Plan Template
– MDCG 2020-8: Post-Market Clinical Follow-up (PMCF) Evaluation Report Template

Study design is one component of the broader PMCF framework. For the complete picture, see our guide on PMCF plans and reports under MDR.

Related Resources

Read our complete guide to PMCF under EU MDR: PMCF Plan & Report under EU MDR

Or explore Complete Guide to Clinical Evaluation under EU MDR