The SSCP trap: what gets Class III submissions rejected first
I reviewed an SSCP last month where the manufacturer spent pages explaining their device design. The Notified Body stopped reading at page three. The document was fundamentally flawed before the technical discussion even started. For Class III devices and implantables, SSCP structure failures account for more initial rejections than inadequate data.
This is not about missing a data point. It is about misunderstanding what the SSCP is supposed to communicate and to whom.
When reviewers open an SSCP for a high-risk device, they are not looking for reassurance. They are looking for risk signals that were either missed or mismanaged in your clinical development. The structure of your document, the logic of your argumentation, and the completeness of your risk contextualization reveal more about your clinical thinking than the studies themselves.
Most manufacturers write the SSCP as a summary report. Reviewers read it as an evaluation of your clinical judgment.
What the SSCP is actually for
The Summary of Safety and Clinical Performance is not an internal document. It is a public-facing transparency tool mandated by MDR Article 32 and detailed in MDCG 2019-9. It exists to give healthcare professionals, patients, and competent authorities access to safety and performance conclusions.
But before publication, it passes through Notified Body review. And that review is not a courtesy check.
The Notified Body assesses whether your SSCP reflects a coherent understanding of the clinical evidence, the residual risks, and the benefit-risk profile. If the structure or content suggests you misunderstood the clinical evaluation process, the SSCP gets flagged immediately.
For Class III and implantables, this means the bar is set higher. Reviewers know that devices in these categories carry significant residual risks. They expect the SSCP to demonstrate that you have systematically identified, quantified, and contextualized those risks within the intended clinical use.
The SSCP is not a marketing document. It is not even primarily a patient information tool at the review stage. It is a regulatory assessment of whether your clinical evaluation conclusions are appropriately qualified and transparently communicated.
What gets rejected in the first three pages
Reviewers do not wait until page twenty to form an opinion. They scan the opening sections for structural coherence. If the foundation is weak, they stop reading and issue a deficiency letter.
Here is what triggers immediate rejection:
Missing or vague device identification
If the device description does not clearly define the intended use, the patient population, the clinical indication, and the operational context, the reviewer cannot assess whether the clinical data is relevant. This is especially critical for implantables where anatomical site, duration of implantation, and interaction with tissue define the risk profile.
I have seen SSCPs that describe a cardiovascular implant without specifying whether it is intended for left or right heart use. That distinction changes the entire risk and performance evaluation. If it is not clear upfront, the SSCP is fundamentally unusable.
No explicit link between residual risks and clinical evidence
The SSCP must show that every identified residual risk in the risk management file has been addressed in the clinical evaluation. Reviewers cross-reference your risk table with your clinical data sections. If residual risks are listed but not linked to specific clinical findings or PMCF commitments, the document is incomplete.
For Class III devices, this is not optional. The reviewer expects a structured argument that maps each significant residual risk to the evidence that characterizes its probability and severity in real use.
Too often, manufacturers list residual risks in a table without explaining how the clinical data supports their acceptability. The result reads like a summary of the CER rather than an integrated safety and performance argument.
Generic benefit-risk conclusions
Reviewers reject SSCPs that conclude with generic statements like "the benefits outweigh the risks for the intended population." A benefit-risk conclusion must be specific: which benefits, of what magnitude and duration, weighed against which residual risks, at what probability and severity.
Frequently Asked Questions
What is a Clinical Evaluation Report (CER)?
A CER is a mandatory document under MDR 2017/745 that demonstrates the safety and performance of a medical device through systematic analysis of clinical data. It must be updated throughout the device lifecycle based on PMCF findings.
How often should the CER be updated?
The CER should be updated whenever significant new clinical data becomes available, after PMCF activities, when there are changes to the device or intended purpose, and at minimum during annual reviews as part of post-market surveillance.
What causes CER rejection by Notified Bodies?
Common reasons include inadequate equivalence demonstration, insufficient clinical data for claims, poorly structured SOTA analysis, missing gap analysis, and lack of clear benefit-risk determination. Structure and logical flow are as important as the data itself.
Which MDCG guidance documents are most relevant for clinical evaluation?
Key documents include MDCG 2020-5 (equivalence), MDCG 2020-6 (sufficient clinical evidence), MDCG 2020-13 (CEAR template), MDCG 2020-7 (PMCF plan template), and MDCG 2020-8 (PMCF evaluation report template). For the SSCP specifically, MDCG 2019-9 and MDR Article 32 are the primary references.
✌
Peace, Hatem
Your Clinical Evaluation Partner





