15 Audit Questions That Could Save You From an FDA Warning Letter (Part 3)
In our final installment, we explore five overlooked—but frequently cited—compliance risks drawn from recent FDA enforcement actions.
In the first two parts of this series (Parts 1 and 2), we laid out ten essential audit questions inspired by recent FDA warning letters, examining issues ranging from inadequate process validation to weak supplier qualification. Each question was built around real enforcement findings, and each aimed to help QA and compliance leaders interrogate their own systems before the FDA does it for them.
Now, in Part 3, we’re wrapping up the series with the final five questions. These focus on areas that may not always get top billing on internal audit checklists—but they should. We’re talking about the rigor of OOS investigations, the completeness of adverse event reporting systems, the integrity of product testing programs, the control of water systems, and the depth of your training program.
Every one of these has surfaced in recent warning letters. And every one continues to create preventable risk.
Talk to us so we can help you catch and remediate them during an audit, rather than during an actual inspection. We provide audit and mock inspection support for 17 of the top 25 life science companies.
11. Do you thoroughly investigate any unexplained discrepancy or OOS result, with appropriate root cause analysis and corrective actions?
FDA cited Annovex Pharma for failing to investigate OOS results and unexplained discrepancies, even when the batch had already been distributed. That’s a serious breakdown, but it’s also not uncommon.
In our audit work, we continue to see investigation practices that fail to hold up to scrutiny:
“Lab error” cited without supporting evidence.
OOS results invalidated without documented justification.
Manufacturing-related causes left unexplored.
No assessment of potential impact on other batches or products.
CAPAs that address symptoms, not root causes.
No follow-up to confirm CAPA effectiveness.
We recommend going beyond surface-level problem-solving. A strong investigation process includes structured root cause analysis tools—such as the 5 Whys, Fishbone diagrams, and fault trees—along with hypothesis testing, correlation of related data (like historical batch trends), and clear traceability to product disposition decisions.
Also, make sure you’ve established escalation triggers for recurring issues or high-risk deviations, and that investigation timelines are enforced, with documented justifications for any delays. Delayed investigations are always a red flag. Escalation should fire automatically when an OOS result involves a high-risk product category or a recurring issue, as in the sketch below.
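To make "automatic escalation" concrete, here is a minimal sketch of what such a rule might look like in code. Everything specific in it is an assumption for illustration: the `OOSRecord` fields, the `HIGH_RISK_CATEGORIES` set, and the 90-day/two-event recurrence definition should all come from your own risk assessments, not from us.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical categories your risk assessment flags as high-risk.
HIGH_RISK_CATEGORIES = {"sterile injectable", "controlled substance"}
RECURRENCE_WINDOW = timedelta(days=90)  # assumed look-back period
RECURRENCE_THRESHOLD = 2  # assumed: 2+ prior similar events = recurring

@dataclass
class OOSRecord:
    product_category: str
    test_name: str
    observed: date

def should_escalate(new_event: OOSRecord, history: list[OOSRecord]) -> bool:
    """Escalate automatically for high-risk categories or recurring issues."""
    if new_event.product_category in HIGH_RISK_CATEGORIES:
        return True
    # Count prior OOS events on the same test within the look-back window.
    similar = [
        e for e in history
        if e.test_name == new_event.test_name
        and new_event.observed - e.observed <= RECURRENCE_WINDOW
    ]
    return len(similar) >= RECURRENCE_THRESHOLD
```

The specifics matter less than the principle: the trigger is evaluated by the system at intake, not left to an investigator's discretion after the fact.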
Maybe most importantly: treat every investigation as an opportunity to prove control, not just close an event. Many FDA investigators look for this specifically.
A few other things to keep in mind here:
Require hypothesis testing and data correlation. We’ll say it louder for those in the back: It's not enough to say “lab error.” The FDA expects you to conduct actual investigations. Those should include a testable hypothesis (e.g., sample prep error), supported by repeatability, equipment logs, analyst training, and related data sets (e.g., historical trends, other lots).
Link your OOS investigations to the lot disposition logic. You should be clearly tying your investigation outcomes to batch disposition decisions, and making sure QA documents why the batch is safe to release (or not). (We often find that this linkage is vague or missing entirely.)
Track CAPA effectiveness post-implementation! This should be obvious, but we see it missing all the time. You should be running structured follow-ups (30-, 60-, 90-day reviews) on CAPAs tied to investigations. This ensures effectiveness, not just “action.”
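As a rough sketch of that cadence, the snippet below generates 30-, 60-, and 90-day effectiveness checkpoints the moment a CAPA is implemented. The field names are illustrative placeholders; the point is that the reviews are scheduled up front rather than left to memory.

```python
from datetime import date, timedelta

REVIEW_OFFSETS_DAYS = (30, 60, 90)  # structured follow-up cadence

def schedule_effectiveness_reviews(capa_id: str, implemented_on: date) -> list[dict]:
    """Generate effectiveness-check tasks for a newly implemented CAPA."""
    return [
        {
            "capa_id": capa_id,
            "due": implemented_on + timedelta(days=offset),
            "task": f"{offset}-day effectiveness review",
        }
        for offset in REVIEW_OFFSETS_DAYS
    ]

# Example: a CAPA implemented on 2024-03-01 gets reviews due
# 2024-03-31, 2024-04-30, and 2024-05-30.
print(schedule_effectiveness_reviews("CAPA-0042", date(2024, 3, 1)))
```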
12. Do you have adequate procedures for reporting adverse events to regulatory authorities, including clear definitions, timelines, and investigation requirements?
Annovex also failed to build a compliant AE reporting program. Procedures lacked basic definitions, timelines, and investigation steps, and didn’t specify what information was needed to evaluate events properly.
This type of gap often goes unnoticed in QA audits, yet it has a high impact. We’ve seen similar breakdowns across client sites:
Undefined reporting timelines (especially for serious, unexpected events).
Missing definitions of reportable events.
No investigation workflow for incoming AEs.
Limited or no documentation of follow-up activities.
Disconnected functions—customer service, regulatory, and QA not talking.
No trending of AE data to detect product signals.
Effective (and compliant) systems do more than capture events—they prioritize them, route them quickly, and analyze them as a whole. One key best practice: define internal response timelines, not just regulatory deadlines. A 15-day FDA window means nothing if your internal triage takes 10 days to get the event to the right team.
Cross-train your customer-facing teams to recognize AE language and escalate immediately. And if you aren’t trending AE data against complaint or batch information, you’re missing early warnings that could protect patients—and your brand!
The most mature companies build impressive end-to-end signal detection workflows. These don’t just log adverse events—they trend them. Signal detection algorithms or manual thresholds (e.g., 3 similar events in 30 days) trigger escalations, trending reviews, or labeling updates.
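As a simplified illustration of a manual-threshold rule like "3 similar events in 30 days," the sketch below slides a 30-day window over AE dates grouped by product and event term, and flags any pair that breaches the threshold. The event structure and the similarity key are assumptions; a real system would run against your safety database and coded (e.g., MedDRA) terms.

```python
from collections import defaultdict
from datetime import date, timedelta

WINDOW = timedelta(days=30)   # rolling window from the example above
THRESHOLD = 3                 # e.g., 3 similar events in 30 days

def detect_signals(events: list[dict]) -> set[tuple[str, str]]:
    """Return (product, event_term) pairs that breach the threshold.

    Each event is assumed to look like:
    {"product": "X", "event_term": "rash", "received": date(...)}.
    """
    by_key: dict[tuple[str, str], list[date]] = defaultdict(list)
    for e in events:
        by_key[(e["product"], e["event_term"])].append(e["received"])

    signals = set()
    for key, dates in by_key.items():
        dates.sort()
        start = 0
        # Slide a 30-day window across the sorted dates.
        for end in range(len(dates)):
            while dates[end] - dates[start] > WINDOW:
                start += 1
            if end - start + 1 >= THRESHOLD:
                signals.add(key)
                break
    return signals
```

Whether the trigger is an algorithm like this or a manual review threshold, what matters is that a breach automatically opens a trending review rather than waiting for someone to notice.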
Also, we tend to see that systems define external reporting timelines, but not internal ones. Define a maximum time (e.g., 24–48 hours) for AEs to move from intake to pharmacovigilance (PV) teams for triage, especially for serious or unexpected cases.
13. Do you perform all required testing on finished drug products—including identity, strength, quality, and purity—before release?
Both Yangzhou Sion and Hangzhou Glamcos failed to perform key finished product testing before releasing batches. One didn’t conduct assays for active strength. The other didn’t test for quality or purity. The products went out anyway.
This is one of those areas where shortcuts tend to accumulate slowly—especially when teams rely heavily on in-process testing. But that’s not enough.