The FDA Group's Insider Newsletter
FDA Warning Letter Breakdown: Four Letters Reveal Cascading Control Failures Across Device Firms

The FDA's CDRH recently warned four medical device makers about a range of failures in procedures and processes.

The FDA Group
May 30, 2025


This breakdown is available for paid subscribers. Only paid subscribers get regular full access to our breakdowns and other analyses. If you’re not already a paid subscriber, you can upgrade here. Want to stay out of our warning letter breakdowns? Contact us to access our global network of 3,250+ consultants and 250+ former FDA employees. We run audits, mock inspections, and remediation for 17 of the top 25 life science firms.

A recent cluster of FDA warning letters issued in May 2025 reveals troubling patterns in quality system breakdowns that should concern any medical device manufacturer.

Read them here, here, here, and here.

These letters—sent to a German blood collection device company, a U.S. VR medical assessment manufacturer, an ophthalmic device contract manufacturer, and a hemostatic device developer—are all case studies in how fundamental quality system failures create cascading risks across interconnected processes.

What makes these cases particularly instructive is the domino effect that follows in each of them when core quality systems collapse:

  • When process validation fails, equipment qualification suffers.

  • When design controls break down, CAPA systems become overwhelmed.

  • When supplier management is inconsistent, nonconforming product control becomes nearly impossible to maintain effectively.

We pulled out the key patterns emerging from these enforcement actions and extracted a few actionable lessons for preventing similar systemic failures based on what we typically recommend and help remediate in our own device audits and mock inspection projects.

Process validation failures

The most striking pattern across these warning letters is the complete breakdown of process validation—not just poor validation, but decades-long operation without any validation whatsoever.

We wrote about this problem at length with one of these firms here:

FDA Warning Letter Breakdown: A Quality System Collapses at a Blood Collection Device Manufacturer (The FDA Group, May 22)

The blood collection device manufacturer operated critical coating machines for 30 years without validation. When questioned during inspection, quality management stated they were "still trying to figure [out] how to retrospectively validate them." Even more concerning, software used to inspect coated components was never validated and actively accepted out-of-specification products during the FDA's demonstration.

The ophthalmic contract manufacturer faced similar validation disasters. They released multiple sterile device lots without conducting revalidation when switching to a new contract sterilizer. Management's justification was that they "did not believe that a revalidation was required because the new contract sterilizer was able to sterilize the product with the specified dose mapping." This is a core misunderstanding of validation requirements—and it led to products being distributed without proper process controls.

The VR device manufacturer failed to validate its seal integrity inspection method for sterile pouches containing surgical components despite having written procedures that required such validation. Without validation, there was no assurance that manufacturing employees could "repeatably and reproducibly detect conforming and nonconforming seal pouches with a high degree of assurance."
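Validating an inspection method like this typically means an attribute agreement study: inspectors judge a blinded set of samples with known conforming/nonconforming status against a predefined acceptance criterion. A minimal sketch of that logic, with illustrative names, data, and thresholds (none of which come from the warning letter):

```python
# Hypothetical sketch: attribute agreement study for a visual seal-inspection
# method. The method "passes" only if EVERY inspector meets a predefined
# accuracy threshold on blinded samples of known status. All values here are
# illustrative assumptions, not figures from the cited warning letters.

ACCURACY_THRESHOLD = 0.95  # predefined acceptance criterion

def inspector_accuracy(judgments, truth):
    """Fraction of samples the inspector classified correctly."""
    correct = sum(1 for j, t in zip(judgments, truth) if j == t)
    return correct / len(truth)

def method_validated(all_judgments, truth, threshold=ACCURACY_THRESHOLD):
    """The method passes only if every inspector meets the threshold."""
    return all(inspector_accuracy(j, truth) >= threshold for j in all_judgments)

# Known status of 10 blinded pouches: True = conforming seal
truth       = [True, True, False, True, False, True, True, False, True, True]
inspector_a = [True, True, False, True, False, True, True, False, True, True]  # 10/10
inspector_b = [True, True, False, True, True,  True, True, False, True, True]  # 9/10

print(method_validated([inspector_a, inspector_b], truth))  # False: B is below 0.95
```

The key design point is the predefined threshold: "repeatably and reproducibly detect... with a high degree of assurance" has to be pinned to a number before the study runs, not judged after the fact.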

This should go without saying: Process validation isn't regulatory paperwork—it's your proof that manufacturing processes can consistently produce safe, effective devices. Operating without validation means gambling with patient safety while creating enormous liability exposure.

In our audit work for device firms, we sometimes find validation programs that look compliant on paper but fail under scrutiny.

  • A common pattern: companies validate initial process parameters but never revalidate when they make "minor" equipment modifications, software updates, or facility changes that cumulatively invalidate their original validation.

  • We also see validation protocols that test whether equipment can run, but not whether it can reliably detect and reject defective products—a critical distinction that becomes obvious during FDA demonstrations.

Ask yourself:

  • Can you produce validation documentation for every machine that's been in use for more than 5 years, or do you have written justification for why validation isn't required?

  • If you demonstrated your inspection software to an FDA investigator tomorrow, would it correctly reject out-of-specification products every time?

  • When you switch contract service providers (sterilizers, testing labs), do you have a documented decision tree that determines when revalidation is required vs. when it's not?

  • For any process that directly affects product safety or efficacy, can you statistically prove it's under control, or are you relying on "it's always worked" as evidence?

  • Have you revalidated processes after cumulative "minor" changes (software patches, equipment modifications, facility moves) that might have invalidated your original validation baseline?
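On the "can you statistically prove it's under control" question, one common piece of evidence is a process capability index (Cpk) computed against the specification limits. A minimal sketch, using made-up coating-thickness data (a real study would also confirm process stability and distribution assumptions before relying on Cpk):

```python
# Hypothetical sketch: process capability index (Cpk) as statistical evidence
# that a process stays within specification. Spec limits and measurements are
# illustrative assumptions, not data from the cited firms.
import statistics

def cpk(samples, lsl, usl):
    """Distance from the process mean to the nearest spec limit,
    in units of three standard deviations. >= 1.33 is a common target."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return min(usl - mean, mean - lsl) / (3 * sd)

# Coating thickness measurements (microns), spec 10.0 +/- 1.0
samples = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.1, 10.0]
print(round(cpk(samples, lsl=9.0, usl=11.0), 2))
```

A Cpk comfortably above 1.33 is the kind of documented, quantitative answer that replaces "it's always worked" as evidence.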

Equipment qualification failures create product defects

The blood collection device company's CAPA system documented machine failures that resulted in damaged capillaries in finished products across multiple batches. The root cause was identified as equipment failure, yet these machines had never been qualified for their intended use. Management confirmed that validation plans were "still in process and have not been executed."

It’s a textbook example of how equipment qualification failures create a destructive cycle: unqualified equipment leads to product defects, which trigger customer complaints, which consume resources in firefighting mode rather than preventing problems systematically.

To be clear, equipment qualification provides assurance that manufacturing equipment can reliably produce conforming products. When equipment isn't qualified, you're essentially operating a manufacturing process with no documented evidence it can perform as intended.

Our mock inspections pretty consistently reveal that companies qualify equipment for basic operational parameters but too often miss the qualification of critical performance characteristics that directly impact product quality.

We've seen qualification protocols that verify a machine "turns on and runs" but don't test whether it can consistently produce products within specification limits. We’ve also seen some teams qualify individual pieces of equipment but never qualify the integrated line or system performance, creating gaps when equipment interactions affect product quality.

Ask yourself:

  • If a machine failure resulted in product defects reaching customers, could you demonstrate that the machine was properly qualified for its intended use before the failure?

  • Have you qualified integrated line performance, or only individual equipment pieces that might interact in ways that affect product quality?

  • Do you have qualification protocols that address all critical performance parameters that affect product quality, not just basic operational requirements?

  • When equipment failures trigger CAPAs, do you automatically reassess qualification status as part of your investigation, or do you focus only on the immediate repair?

  • Can you prove that backup equipment can produce equivalent quality products, or would switching to backup equipment essentially require starting qualification from scratch?

Design control system collapses

The blood collection device manufacturer couldn't produce design history files for its products. When investigators requested initial design documents addressing original design input, output, verification, validation, and design review records, quality management stated the firm "may not have maintained such documents." Its existing procedures consisted of basic flowcharts and change lists that failed to describe how design changes were managed to ensure product quality remained unaffected.

The VR device manufacturer demonstrated multiple design control failures, including:

  • Approving test reports that didn't meet design requirements (eye tracking accuracy was specified at one level but tested results showed significantly different performance).

  • Failing to verify that the software correctly warned users about low-quality data.

  • Maintaining outdated design input requirements that referenced discontinued device models.

  • Implementing design changes without proper verification testing.

In our audits, we sometimes see design history files telling a story of compliance rather than actual design development.

Some specific problems we find more than we should:

  • DHFs constructed retrospectively to satisfy regulatory requirements.

  • Test reports that are approved despite not meeting stated requirements (with no documented rationale).

  • Design verification testing that tests "similar" parameters rather than the exact requirements stated in design inputs.

We also find that companies often fail to assess cumulative design changes—each individual change seems minor, but collectively they can fundamentally alter device performance.

Ask yourself:

  • If an FDA investigator asked for your design history files right now, could you produce complete documentation within 30 minutes that shows why every design decision was made?

  • Have you assessed whether cumulative design changes since your last 510(k) clearance might require a new submission, or do you evaluate changes only individually?

  • Do your design verification tests actually test the specific requirements stated in your design inputs, or do they test "close enough" parameters?

  • When you approve test reports that don't meet specifications, is there documented rationale for why the device is still safe and effective?

  • Can you trace every manufacturing procedure back to an approved design output, or have processes "drifted" over time without design control oversight?

CAPA system failures

The blood collection device company's CAPA procedure failed to adequately describe root cause investigation methods, effectiveness verification, and statistical trend analysis. Specific CAPAs were closed without establishing adequate corrective actions, failed to reference training evidence, and didn't perform complete root cause analyses.

The VR device manufacturer had a CAPA opened in October 2021 to address higher than expected assessment scores, with corrective action stating that "the raw data capture implementation of the eye tracker software must be updated." During the 2025 inspection, this software correction still had not been implemented—representing a three-and-a-half-year delay in addressing a known product performance issue.

So many CAPA systems suffer from what we call "investigation fatigue"—teams stop at the first plausible root cause rather than rigorously testing alternative hypotheses. We also see CAPAs closed on the basis of implementing procedures or training rather than on demonstrating measurable improvement in the actual problem.

Another pattern: companies that investigate individual issues thoroughly but fail to connect similar problems across different products, time periods, or suppliers that would reveal systemic issues.

Ask yourself:

  • Do you have CAPAs that have been open for more than 12 months, and if so, can you explain why the corrective action is taking longer than a full product development cycle?

  • When you "verify CAPA effectiveness," do you have predefined statistical acceptance criteria, or do you rely on subjective assessment months later?

  • If the same type of problem occurred at three different suppliers or in three different product lines, would your CAPA system automatically flag this as a systemic issue requiring investigation?

  • Do your CAPA investigations include at least three alternative root cause hypotheses before settling on one, or do you stop at the first plausible explanation?
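Two of these checks—flagging long-open CAPAs and spotting the same problem recurring across sources—lend themselves to simple automated queries over CAPA records. A sketch with hypothetical records, field names, and thresholds (nothing here is drawn from the cited firms' actual data):

```python
# Hypothetical sketch: automatic flags a CAPA system could raise.
# Records, field names, and thresholds are illustrative assumptions.
from datetime import date

CAPAS = [
    {"id": "CAPA-101", "opened": date(2021, 10, 1), "closed": None,
     "category": "software defect", "supplier": "A"},
    {"id": "CAPA-205", "opened": date(2024, 7, 15), "closed": None,
     "category": "seal failure", "supplier": "B"},
    {"id": "CAPA-230", "opened": date(2024, 9, 1), "closed": None,
     "category": "seal failure", "supplier": "C"},
    {"id": "CAPA-244", "opened": date(2024, 11, 9), "closed": None,
     "category": "seal failure", "supplier": "D"},
]

def overdue(capas, today, max_days=365):
    """CAPAs open longer than a year—the kind of aging that became a
    3.5-year delay in one of the warning letters."""
    return [c["id"] for c in capas
            if c["closed"] is None and (today - c["opened"]).days > max_days]

def systemic(capas, min_sources=3):
    """Categories recurring across 3+ suppliers: a signal of a systemic
    issue rather than isolated events."""
    sources = {}
    for c in capas:
        sources.setdefault(c["category"], set()).add(c["supplier"])
    return [cat for cat, s in sources.items() if len(s) >= min_sources]

print(overdue(CAPAS, date(2025, 5, 30)))  # ['CAPA-101']
print(systemic(CAPAS))                    # ['seal failure']
```

Queries like these don't replace investigation rigor, but they make aging and cross-supplier patterns visible before an investigator finds them for you.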

Nonconforming product control and release failures

One of the most alarming patterns was the systematic release of nonconforming products, including products that were never actually sterilized but released as sterile.

The ophthalmic contract manufacturer released 120 boxes of intravitreal injection kits as sterile without ever sending them for sterilization. They went even further, fabricating sterilization certificates of processing "as if they originated from the customer-approved contract sterilizer." This resulted in a Class II recall by the manufacturer, followed by a Class I recall by their customer.

The blood collection device company failed to document disposition decisions for nonconforming products according to their own procedures, creating a risk that defective devices could reach customers. If you cannot properly control nonconforming product, you risk shipping defective devices to customers.

We frequently see companies with robust procedures for handling obvious nonconformances that lack a good approach for "borderline" cases—situations where each individual disposition decision seems reasonable, but no one is tracking whether multiple borderline releases create cumulative risk.

Another common finding: companies that properly segregate nonconforming products physically but have inadequate systems to prevent them from accidentally entering the distribution chain during busy periods or shift changes.

Ask yourself:

  • Can you account for the disposition of every nonconforming product identified in the past 12 months, including who made the decision and what documentation supports it?

  • If you found nonconforming products in your warehouse today, do you have foolproof systems to prevent them from accidentally being shipped to customers?

  • When products don't meet specifications, do you have documented criteria that determine when they can be released with justification vs. when they must be reworked or scrapped?

  • Do your disposition decisions consider cumulative effects—could releasing multiple "borderline acceptable" products create patient safety risks even if each individual decision seems reasonable?
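The cumulative-risk point can be made concrete with a back-of-the-envelope calculation: even a small residual defect probability per borderline lot compounds quickly across many releases. The 2% per-lot figure below is purely an illustrative assumption:

```python
# Hypothetical sketch: why multiple "borderline acceptable" releases compound.
# If each borderline lot carries a small residual defect probability, the
# chance that at least one released lot is defective grows quickly.
# The 2% per-lot risk is an illustrative assumption.

def cumulative_risk(per_lot_risk, lots_released):
    """P(at least one defective lot) = 1 - P(all lots conforming)."""
    return 1 - (1 - per_lot_risk) ** lots_released

print(round(cumulative_risk(0.02, 1), 3))   # 0.02
print(round(cumulative_risk(0.02, 20), 3))  # 0.332
```

One borderline release at 2% risk may look tolerable in isolation; twenty of them put the odds of shipping at least one defective lot at roughly one in three.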

© 2025 The FDA Group, LLC