Why Most PMS Plans Fail Before They Even Start


Written by HATEM RABEH, MD, MSc Ing

Your Clinical Evaluation Expert and Partner

A PMS plan arrives on my desk. First page looks standard. Second page reveals the problem: post-market surveillance activities are listed, but nothing connects them to actual clinical risk. The manufacturer checked every box. But they missed the entire point. Under MDR Article 83, this is not about having a plan. It is about having a system that feeds clinical evaluation with actionable evidence.

I see this pattern in almost every technical documentation review. The PMS plan exists. It follows a template. It includes sections on complaint handling, vigilance, trend analysis, and literature review. Everything seems in order until you ask one question: How does this plan generate clinical evidence?

That question usually creates silence.

The truth is, most PMS plans are written as compliance documents. They satisfy the requirement to have a plan. But they do not deliver what MDR actually demands: a proactive system that continuously evaluates device safety and performance in real-world use.

The Regulatory Expectation Nobody Explains Clearly

MDR Article 83 does not ask for a list of surveillance activities. It requires a “proactive and systematic” process to collect, record, and analyze post-market data. The emphasis is on systematic. That means the plan must define how data flows from collection to evaluation to decision-making.

Most manufacturers stop at collection. They describe how they will monitor complaints. How they will track adverse events. How they will review scientific literature. But they do not explain what happens next. How is complaint data analyzed for clinical trends? How are literature findings integrated into benefit-risk assessment? How are PMCF results used to update clinical evaluation?

This is where MDCG 2020-7 becomes critical. It clarifies that the PMS plan is not standalone. It must be aligned with clinical evaluation. It must feed the ongoing process of confirming that clinical benefits continue to outweigh clinical risks.

Key Insight
A PMS plan is not a surveillance checklist. It is the operational framework that keeps clinical evaluation alive after the device reaches the market.

When Notified Bodies review PMS plans, they look for this connection. They want to see that every surveillance activity has a clinical purpose. If your plan lists “literature review” but does not specify how findings affect device safety conclusions, that is a deficiency. If it describes complaint handling but does not link complaint trends to residual risk evaluation, that is another deficiency.

What Manufacturers Consistently Underestimate

Three areas stand out as the most underestimated in PMS planning.

1. The Clinical Relevance Filter

Manufacturers collect data. But they rarely filter it for clinical relevance before it reaches clinical evaluation. A PMS plan should define which data points matter clinically. Not every complaint is clinically significant. Not every literature article changes the benefit-risk profile. The plan must establish thresholds, criteria, and triggers for escalation.

Without this filter, clinical evaluators drown in noise. Or worse, important signals get buried in routine data.

I worked on a file where the PMS plan required quarterly review of all complaints. Sounds responsible. But no criteria existed for what constituted a clinically relevant trend. After two years, the manufacturer had thousands of complaint records and no actionable clinical insight. The periodic safety update report repeated the same conclusion: “No safety concerns identified.” The Notified Body disagreed. The pattern was there. The system just was not designed to detect it.
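As an illustration, a clinical relevance filter can be as small as one function with explicit criteria. The sketch below is hypothetical Python; the record fields, the 0-3 severity scale, and the escalation rules are assumptions you would replace with the definitions in your own risk management file:

```python
from dataclasses import dataclass

# Hypothetical complaint record; the fields and the 0-3 severity scale are
# illustrative, not taken from any standard.
@dataclass
class Complaint:
    harm_severity: int   # 0 = no harm ... 3 = serious injury
    failure_mode: str

def is_clinically_relevant(c: Complaint, known_failure_modes: set[str]) -> bool:
    """Escalate a complaint to clinical evaluation if it caused harm, or if it
    describes a failure mode not anticipated in the risk management file."""
    if c.harm_severity >= 2:   # moderate or serious harm is always relevant
        return True
    # A harmless but novel failure mode is still a potential new signal.
    return c.failure_mode not in known_failure_modes

known = {"connector_wear", "battery_depletion"}
novel = Complaint(harm_severity=0, failure_mode="lead_fracture")
routine = Complaint(harm_severity=0, failure_mode="battery_depletion")
print(is_clinically_relevant(novel, known))    # True: unanticipated failure mode
print(is_clinically_relevant(routine, known))  # False: known mode, no harm
```

The point is not the code. The point is that the escalation criteria are written down and executable, so "no safety concerns identified" becomes a conclusion the system can actually justify.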

2. The PMCF Integration Gap

PMS and PMCF are not the same. PMS is the overall surveillance system. PMCF is the clinical component within that system. Most PMS plans mention PMCF. Few explain how PMCF findings loop back into the broader PMS framework.

This creates a disconnect. PMCF studies generate clinical data. But that data often sits in a separate report. It does not inform complaint analysis. It does not update risk management. It does not trigger changes in instructions for use or training materials.

Common Deficiency
PMCF results documented in a clinical evaluation report update, but no connection established between PMCF findings and PMS trending, complaint handling protocols, or field safety corrective actions.

The PMS plan should explicitly describe how PMCF data is integrated. When does PMCF trigger a review of PMS data? When does PMS data influence PMCF study design? These are not theoretical questions. They reflect how a real surveillance system operates.

3. The Accountability Vacuum

Most PMS plans assign general responsibility. Clinical affairs oversees clinical evaluation. Quality handles complaints. Regulatory monitors vigilance. But nobody owns the integration. Nobody is responsible for ensuring that the system works as a system.

This is a structural problem. The PMS plan should identify who reviews aggregated data. Who decides when a trend becomes a signal. Who triggers clinical evaluation updates based on post-market findings. Without clear accountability, the plan becomes a set of parallel activities that never converge.

I have seen plans that required monthly PMS review meetings. Good idea. But the meeting minutes showed no decisions. No action items. Just a review of numbers. That is not surveillance. That is documentation theater.

What a Functional PMS Plan Actually Contains

A functional PMS plan connects surveillance activities to clinical decision-making. Here is what that looks like in practice.

Defined Data Sources and Clinical Rationale

The plan specifies what data is collected and why it matters clinically. Complaint data captures real-world failure modes. Vigilance data reveals serious risks. PMCF studies validate clinical performance claims. Literature review detects emerging evidence that challenges current benefit-risk conclusions.

Each source has a purpose. Each feeds a specific part of clinical evaluation.

Analysis Methods That Generate Clinical Insight

The plan describes how data is analyzed. Not just collected. Complaint trends are evaluated against known risks. Adverse event rates are compared to anticipated residual risks. PMCF outcomes are assessed against clinical performance endpoints. Literature findings are screened for relevance to device safety and performance.

These methods should be explicit. Not vague statements about “ongoing review.”
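To show what "explicit" means here, the comparison of observed event rates against anticipated residual risks can be stated as a rule. This is a minimal sketch using a simple margin multiplier, which is an assumption; a real PMS procedure would specify a proper statistical test (for example a Poisson exact test) and a justified margin:

```python
def exceeds_anticipated_rate(events: int, units_in_use: int,
                             anticipated_rate: float, margin: float = 2.0) -> bool:
    """Flag when the observed event rate exceeds the anticipated residual-risk
    rate by a chosen margin. The 2x margin is illustrative only; a real system
    would justify the threshold and use a statistical test."""
    observed_rate = events / units_in_use
    return observed_rate > margin * anticipated_rate

# Risk file anticipates 1 event per 1,000 uses; field data shows 7 in 2,000.
print(exceeds_anticipated_rate(7, 2000, anticipated_rate=0.001))  # True: 0.0035 > 0.002
print(exceeds_anticipated_rate(1, 2000, anticipated_rate=0.001))  # False: 0.0005 < 0.002
```

A rule like this is auditable: a Notified Body can check both that the threshold exists and that it was applied to the data.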

Triggers for Action

The plan establishes thresholds. When does a complaint trend require investigation? When does a literature finding trigger a clinical evaluation update? When do PMCF results necessitate a change in risk management?

These triggers create accountability. They transform surveillance from passive monitoring into active risk management.
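One way to make such triggers concrete is a trigger table that maps each surveillance input to a threshold and the action it forces. The sketch below is hypothetical; the signal names, thresholds, and actions are illustrative placeholders, not prescribed by MDR:

```python
# Hypothetical trigger table. Each surveillance input maps to a threshold
# and a forced action; every entry here is an illustrative assumption.
TRIGGERS = {
    "complaint_trend":    {"threshold": "rate above 2x anticipated residual risk",
                           "action": "open investigation and notify clinical evaluation"},
    "serious_incident":   {"threshold": "any occurrence",
                           "action": "vigilance report and immediate CER review"},
    "literature_signal":  {"threshold": "new adverse finding relevant to intended purpose",
                           "action": "update benefit-risk assessment"},
    "pmcf_endpoint_miss": {"threshold": "performance below protocol endpoint",
                           "action": "revise CER and risk management file"},
}

def action_for(signal: str) -> str:
    """Return the predefined action for a surveillance signal."""
    return TRIGGERS[signal]["action"]

print(action_for("serious_incident"))  # vigilance report and immediate CER review
```

The format matters less than the commitment: each row names a condition and an owner-facing action, so no signal can be reviewed without a decision.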

Feedback Loops to Clinical Evaluation

The plan maps how post-market data updates clinical evaluation. It should reference the Clinical Evaluation Report and explain how PMS findings feed into periodic CER updates. It should link to the Post-Market Clinical Follow-Up Evaluation Report and show how PMCF results influence benefit-risk conclusions.

This is not administrative linkage. This is clinical integration.

Key Insight
If your PMS plan does not describe how post-market data changes clinical conclusions, then it is not a surveillance system. It is a data collection exercise.

Why This Matters More Than You Think

The PMS plan is not reviewed once during initial certification and then forgotten. It is a living document. Notified Bodies assess it during surveillance audits. They check whether the plan is followed. Whether data is collected as described. Whether clinical evaluation is updated based on post-market findings.

If your PMS plan is weak, every audit becomes difficult. You cannot demonstrate proactive surveillance. You cannot show that clinical evaluation reflects current evidence. You cannot prove that your device continues to meet MDR requirements for safety and performance.

More importantly, a weak PMS plan creates real clinical risk. You miss signals. You fail to detect performance issues before they escalate. You operate on outdated clinical assumptions while the real-world evidence tells a different story.

This is not about passing audits. This is about maintaining device safety over time.

What Happens Next

When you write or revise your PMS plan, ask yourself one question: If a serious safety issue emerged tomorrow, would this plan detect it?

If the answer is uncertain, your plan is not systematic. It is not proactive. It does not meet MDR Article 83.

Fix the integration first. Connect surveillance activities to clinical decision-making. Define who owns the process. Establish triggers that force action. Map the feedback loops that keep clinical evaluation current.

The rest will follow. Because a PMS plan that actually works looks nothing like a compliance template. It looks like a system designed to learn from real-world use and respond before risks become incidents.

In the next part of this series, we will look at what makes vigilance reporting under MDR different from what most manufacturers expect. The gap is wider than you think.

Peace,
Hatem
Clinical Evaluation Expert for Medical Devices
Follow me for more insights and practical advice.

Frequently Asked Questions

What is a Clinical Evaluation Report (CER)?

A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on post-market data, including PMCF findings.

How often should the CER be updated?

The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when the device or its intended purpose changes, and at least annually for high-risk devices such as class III and implantable devices, in line with post-market surveillance reporting cycles.

What causes CER rejection by Notified Bodies?

Common reasons include inadequate equivalence demonstration, insufficient clinical data to support claims, poorly structured state-of-the-art (SOTA) analysis, missing gap analysis, and lack of a clear benefit-risk determination. Structure and logical flow are as important as the data itself.

Which MDCG guidance documents are most relevant for clinical evaluation?

Key documents include MDCG 2020-5 (Clinical Evaluation – Equivalence), MDCG 2020-6 (Sufficient Clinical Evidence for Legacy Devices), MDCG 2020-7 (PMCF Plan Template), MDCG 2020-8 (PMCF Evaluation Report Template), and MDCG 2020-13 (Clinical Evaluation Assessment Report Template).

Need Expert Help with Your Clinical Evaluation?

Get personalized guidance on MDR compliance, CER writing, and Notified Body preparation.


References:
– Regulation (EU) 2017/745 (MDR), Article 83
– MDCG 2020-7: Post-Market Clinical Follow-up (PMCF) Plan Template

Deepen Your Knowledge

Read Complete Guide to Clinical Evaluation under EU MDR for a comprehensive overview of clinical evaluation under EU MDR 2017/745.