The FDA Group's Insider Newsletter

What Our Auditors Are Finding Lately: 8 Trends Across GMP, GLP, ISO, and CSV Audits (Q1 2026)

A look back at our recent audit reports, supplier qualifications, mock inspections, and internal assessments.

The FDA Group
Apr 09, 2026
∙ Paid

A quick heads up before you dive in: This is a long, in-depth post. Some email providers (including Gmail) may clip it in your inbox. If that happens, just click “View entire message” or “Read on Substack” to see the full article.


Once a quarter, we publish an analysis of the trends we’re seeing in our audit work for clients, so (if you’re a paid subscriber) you can learn and act proactively on what we’re finding across the industry. In this issue, we’re looking at audit reports finalized in Q1 of 2026. Browse past reports here and here.

If you’re a free subscriber, you’ll be cut off before the full findings list and a set of questions every firm should be able to answer with documentary evidence at hand. Graduate to a paid subscription here.

Over the first quarter of 2026, our auditors wrapped up a diverse set of engagements spanning sterile pharmaceutical manufacturing, biologics and ADC production, contract testing laboratories, GLP oversight, clinical packaging and storage, medical device contract manufacturing, and computer systems assurance programs, all across five countries.

What struck us this quarter (as it has in the past) wasn’t the severity of individual findings. It was how consistently the same categories of deficiency showed up across completely different facility types, regulatory frameworks, and geographies.

A sterile ophthalmic manufacturer in India, a biologics CDMO in China, and a computer systems assurance program at a U.S. biotech all struggled with the same fundamental challenge: making sure that the documentation presented for inspection accurately reflects what the underlying systems actually contain.

This report synthesizes those findings into practical guidance. We’ve anonymized every example, but we’ve preserved enough operational context that you should be able to see yourself (or your suppliers, or your CMOs) in these scenarios.

Talk to us if you need auditing, mock inspection, remediation, or other RA/QA/Clinical support.

What’s in this dataset

This trends report draws from nine distinct audit engagements completed (or with reports finalized) during January through March 2026. The set includes:

  • Routine GMP Vendor Audits (Contract Labs and CDMOs): On-site audits of contract testing laboratories and sterile manufacturing CDMOs in the United States, evaluated against 21 CFR Parts 11, 210, 211, and 820 for analytical services and Blow-Fill-Seal sterile ophthalmic manufacturing.

  • Supplier Qualification Audits (Sterile Pharma): First-time qualification audits at sterile injectable and ophthalmic manufacturing sites in India, focusing on aseptic processing, quality management systems, and supply chain oversight under 21 CFR Parts 11, 200, 210, and 211.

  • Biologics/ADC CDMO Qualification Audits: On-site qualification audits at biologics and cell bank manufacturing facilities in China providing ADC drug substance, drug product, and cell bank services, assessed against US FDA 21 CFR Parts 11, 210, and 211 as well as EU EMA EudraLex Vol. 4 and associated Annexes.

  • GLP Oversight Audits: Remote audits evaluating sponsor oversight of GLP activities delegated to contract research organizations, assessed against 21 CFR Parts 11 and 58.

  • Computer Systems Assurance (CSA) Internal Audits: Remote internal audits of CSA programs at U.S. biotechs, evaluated against 21 CFR Part 11, EU GMP Annex 11, and ICH E6(R3).

  • Medical Device Mock FDA Inspections: Remote mock inspections of contract manufacturers producing non-sterile medical devices, assessed against 21 CFR Part 820, ISO 13485:2016, and EU MDR 2017/745.

  • Routine Monitoring Audits (Clinical Packaging): On-site monitoring audits of clinical and commercial packaging and storage operations at U.S. facilities, evaluated against 21 CFR Parts 11, 210, and 211.

Again, no company or site names are used in the trend descriptions below, and we haven’t added any facts beyond those in the reports. Confidentiality is paramount to us.

Stats and trends at a glance

The dataset:

  • 9 audits analyzed across January–March 2026

  • 5 countries: United States, India, China, Malaysia, Switzerland (remote)

  • Audit types include GMP vendor qualification, supplier qualification, biologics CDMO qualification, GLP oversight, internal CSA audit, medical device mock inspection, and routine monitoring.

  • Regulatory frameworks include 21 CFR 210/211, 21 CFR 820, 21 CFR 58, 21 CFR Part 11, EU GMP Annex 11, EudraLex Vol. 4, ISO 13485, EU MDR 2017/745, and ICH E6(R3).

Finding severity breakdown:

  • Critical findings: 0 (0%)

  • Major findings: 10 (20%)

  • Minor findings: 20 (40%)

  • Recommendations: 20 (40%)

Top finding categories (by % of audits affected):

  • Documentation, data integrity, and inspection readiness — 78% (7 of 9 audits)

  • Environmental monitoring and contamination control strategy — 44% (4 of 9)

  • Aseptic process simulation/media fill deficiencies — 33% (3 of 9)

  • Change control and procedural governance — 33% (3 of 9)

  • Vendor and supply chain oversight — 33% (3 of 9)

  • Training and personnel competency — 22% (2 of 9)

  • Facility upkeep and equipment traceability — 22% (2 of 9)

  • Batch record and production documentation — 22% (2 of 9)

Repeat findings:

  • One audit elevated a Minor finding to Major specifically because it was a repeat finding from the prior audit cycle that had not been corrected.

  • Root cause: CAPA from the prior audit was either not implemented or not verified for effectiveness.

Geographic performance:

  • United States: Strongest outcomes overall. Four U.S.-based or U.S.-focused audits produced low finding counts and no Major findings related to fundamental system failures. Established CDMOs and contract labs demonstrated mature quality systems. Findings were primarily recommendation-level procedural refinements.

  • China: Highest risk. Two biologics/ADC qualification audits at Chinese facilities accounted for 6 of the 10 Major findings (60%) and 19 of 50 total findings (38%). Deficiencies concentrated in clean room design and qualification, environmental monitoring, aseptic process simulation, and cross-contamination control strategy. One facility was assessed as high-risk requiring CAPA submission plus re-audit.

  • India: Moderate risk. The sterile ophthalmic manufacturer produced 10 findings including 2 Majors focused on sterilization assurance gaps in the primary packaging supply chain and inadequate buffer preparation controls. Other findings were recommendation-level.

  • Malaysia: The medical device mock inspection produced 1 Major and 4 Minor findings. Change control governance, documentation integrity, incoming material controls, training, and CAPA effectiveness were all flagged.

A few patterns of problems:

  • Initial supplier qualification audits at emerging-market sterile/biologics facilities carry the highest risk. The three qualification audits at facilities in China and India accounted for 29 of 50 total findings (58%) and 8 of 10 Majors (80%).

  • PDF report renderings from electronic validation systems can create ALCOA+ gaps even when the underlying data is correct.

  • Contamination control strategy documentation is lagging behind EU Annex 1 (revised 2023) expectations at multiple facilities.

  • Repeat findings are being escalated in severity. Sponsors and auditors are treating unresolved prior observations as Major.

  • SOPs referencing obsolete system names or platforms create unnecessary procedural compliance failures.

What’s working:

  • Zero critical findings across the entire dataset.

  • One audit (GLP oversight) produced zero findings of any kind (clean across all categories).

  • Strong contamination control strategies at established U.S. sterile CDMOs.

  • Mature data integrity governance and audit trail review programs for validated electronic systems.

  • Well-structured validation lifecycle management platforms demonstrating scalable risk-based approaches to computer systems assurance.

  • Open, responsive engagement from audit hosts at closing meetings.

Below, we expand on the themes we actually saw in the reports. For each, you’ll find: what we observed, how often, representative examples (generalized to protect identity), and prescriptive actions you can take.

1. Documentation, data integrity, and inspection readiness were the biggest gaps (again)

This cluster of problems sits at the top of the list every time we trend this data. This time, it was a problem in 7 of the 9 audits. If there’s one consistent message from the issues we see here, it’s that the gap between “the data is correct” and “the data is inspection-ready” remains wider than most teams realize.

This came up all over the place: sterile manufacturers, biologics CDMOs, contract testing labs, and even at a mature U.S. biotech with an otherwise well-designed validation lifecycle management program.

And to be clear, the underlying systems of record were generally sound. What broke down was the layer between those systems and the artifacts an auditor or inspector actually reviews. The printed PDFs, the exported spreadsheets, the attached printouts in batch records, and the SOPs that describe how all of this is supposed to work.

The specific failures took different forms depending on the facility, but they generally shared a common root cause: nobody had stress-tested whether what came out of the system would hold up under scrutiny, even when what was in the system was complete and accurate.

It’s the kind of gap that doesn’t surface as much during routine operations because internal users know the workarounds. It usually only becomes visible when someone external, like an auditor, an investigator, or a new customer, asks for evidence and discovers that the evidence has problems the facility had grown accustomed to ignoring.

What we found:

  • Truncated PDF outputs from electronic systems: At one facility, PDF exports from a validation lifecycle management platform consistently produced data tables that were cut off. The reports were generated in portrait format, but the tables within them were landscape, resulting in truncated and obscured data across all seven system periodic review reports provided during the audit. A traceability matrix had an entire column cut off. Executed test scripts had comment fields left blank instead of recording “N/A.” The underlying system of record showed the data as complete, but the rendered PDFs (the artifacts an inspector would actually review) failed ALCOA+ requirements. This was graded as Minor only because the system of record was intact, but the auditor explicitly noted that an inspector requesting only PDF exports would encounter incomplete evidence.

  • Unreviewed raw data provided as audit evidence: At the same facility, spreadsheet exports were provided to the auditor in raw form with no evidence of review or approval. One exported worksheet was so large it couldn’t even be previewed, displaying only a message that the sheet was “too large to be previewed” and directing the user to download the file. Providing unreviewed raw data as audit evidence, rather than reviewed and approved reports, creates both a data integrity risk and an inspection-readiness gap. (We’ve seen this before.)

  • Blank fields in batch records and production documentation: At a sterile drug manufacturer, assembly and packing batch records contained numerous blank fields within in-process inspection and quantity reconciliation sections, making it unclear whether activities were performed or were simply not applicable. At a medical device CMO we audited, the same pattern emerged: operators used a “–” symbol in unused fields, but its meaning wasn’t defined anywhere in the document control procedures. These are the kinds of findings that individually seem minor, but collectively paint a picture of documentation discipline that an investigator will likely notice.

  • Weighing slips without equipment traceability: At one sterile pharma site, weighing slips didn’t carry the balance identification number, and a weighing balance was not set up with LIMS despite site presentations claiming that all balances were integrated. Without a balance ID on the slip, the data isn’t fully attributable to a specific, calibrated piece of equipment (a basic ALCOA+ failure).

  • SOPs referencing obsolete systems: At a contract testing lab, a training SOP still referenced the legacy QMS by name, even though the electronic learning management system had been transitioned to a different platform. The training itself was being conducted correctly on the new system, but the SOP was wrong. It’s a procedural compliance failure that’s trivially easy for auditors and the FDA to document if they find it.

  • Batch record printouts not properly attached or initialed: At a sterile CDMO, air filter test result printouts were found loose and not appropriately attached to the executed batch record sections where they belonged. The printouts also weren’t initialed and dated across the attachment pages per Good Documentation Practice. The auditor flagged this as a potential data integrity issue: if these raw data printouts were ever lost, the batch record would be incomplete!

  • Outdated Site Master File layouts: At one biologics facility undergoing qualification, the floor layouts in the Site Master File were outdated, reflecting an earlier facility configuration rather than the current state.

Both auditors and actual FDA investigators routinely cross-reference what’s in the system of record against what appears in printed reports and batch records. A facility whose underlying data is good but whose outputs are truncated, incomplete, or improperly attributed creates the impression of weakness that can color an entire inspection outcome. Don’t give them strings to pull on!

A few recommendations:

  • Make sure every PDF rendering from your electronic systems (validation platforms, LIMS, ERP, document management) produces complete, legible outputs that meet ALCOA+ requirements. Don’t assume that because the system of record is correct, the PDF exports are too. Print every report format your system can generate, and check it visually if needed.

  • Establish a clear policy: raw data exports are never provided as audit or inspection evidence unless they have been formally reviewed and approved. All evidence packages should consist of approved reports or approved exports with documented review. This is basic good practice we see teams stumble on often.

  • Reinforce GDP training with emphasis on “why” rather than just “what.” Operators should understand that blank fields create traceability gaps, not just procedure violations. Where fields are not applicable, require “N/A” entries per your SOP and enforce it during batch record review.

  • Audit your SOP-to-system alignment at least annually. Every SOP that references a system, template, or platform by name should reference the current system. When systems are migrated, add SOP updates to the migration project plan, not as an afterthought.
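For teams whose SOPs are available as plain text, a quick periodic scan can catch stale system references before an auditor does. Here’s a minimal sketch; the system names, directory layout, and `.txt` extension are illustrative assumptions, not details from any audit:

```python
import os
import re

# Hypothetical mapping of obsolete platform names to their current
# replacements -- substitute your own legacy/current system names.
OBSOLETE_TERMS = {
    "LegacyLMS": "NewLMS",
    "OldDocSys": "CurrentDocSys",
}

def scan_sop_directory(sop_dir):
    """Flag SOP text files that still reference obsolete system names.

    Returns a list of (path, line_number, obsolete_name, current_name).
    """
    hits = []
    for root, _dirs, files in os.walk(sop_dir):
        for name in files:
            if not name.lower().endswith(".txt"):
                continue
            path = os.path.join(root, name)
            with open(path, encoding="utf-8", errors="ignore") as fh:
                for lineno, line in enumerate(fh, start=1):
                    for obsolete, current in OBSOLETE_TERMS.items():
                        if re.search(re.escape(obsolete), line):
                            hits.append((path, lineno, obsolete, current))
    return hits

if __name__ == "__main__":
    # "sops" is a placeholder directory name.
    for path, lineno, obsolete, current in scan_sop_directory("sops"):
        print(f"{path}:{lineno}: references '{obsolete}' (current system: '{current}')")
```

A scan like this is no substitute for a documented SOP review, but running it as part of a system migration project plan makes “update every SOP that names the old platform” a verifiable task instead of a hope.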

2. Environmental monitoring and contamination control strategies showed broad vulnerability

The four audits that flagged EM/CCS issues accounted for most of the Major findings in the dataset. And while “only” four audits were affected, they were concentrated in exactly the facility types where these controls matter most: sterile injectables, ophthalmic preparations, and biologics drug product manufacturing.

The pattern across these audits points to something a little bigger than isolated procedural lapses. EU Annex 1 was substantially revised in August 2023, raising expectations for formal contamination control strategy documentation, dynamic monitoring approaches, and risk-based justification of clean room design choices.

The facilities we audited here (particularly those undergoing initial qualification) hadn’t fully closed the gap between their existing programs and the revised expectations.

In some cases, the strategies existed but were incomplete. In others, the monitoring approaches predated the revised guidance and had not been updated. And in a few cases, core design choices had been made without the documented risk-based rationale that current expectations require.

That all boils down to facilities whose day-to-day aseptic execution looks reasonable, but whose operations are not fully documented and justified at the design level.

What we found:

  • Incomplete contamination control strategy: At a biologics CDMO, the cross-contamination control strategy for the drug product area was not fully implemented. There was no documented justification for an atypical pressure-differential configuration in the filling area. The configurations were unusual and would require explicit risk-based justification, but none was available.

  • Clean room design not aligned with Annex 1: At a cell bank manufacturing facility, the liquid-filling clean room was constructed as Grade-A with a Grade-C background, not aligned with Annex 1's expectation for Grade-A within Grade-B for aseptic processing. Also:

    • The CCS strategy did not address airborne cross-contamination in both powder- and liquid-media filling areas.

    • Wall and ceiling joints were not fully sealed, leaving uncleanable crevices.

    • There was no bioburden (or equivalent) testing at the site.

    • Environmental monitoring stands in the sterility test lab were positioned above working height, making the monitoring results unrepresentative of the actual working environment.

  • Routine requalification gaps: At that same facility, routine requalification for Grade-A areas did not include tests for pressure differentials, viable particulate counts (VPC), clean-up rate, or air volume, and none of the testing that was conducted was performed under dynamic conditions. The requalification frequency for Grade-C was set to yearly, even though it supports Grade-A areas.

  • APQR missing water and EM trend reviews: At a sterile drug manufacturer, the Annual Product Quality Review did not address WFI test results for any of the batches produced during the review period. Trends concerning WFI, Purified Water, and Pure Steam were not reviewed in the APQR. Environmental monitoring trends were also absent. For preparations where WFI is a critical component, this was a significant gap in product quality trending.

  • Pure steam condensate not evaluated: At the same site, during the qualification of steam sterilizers, the pure steam condensate was not evaluated for compliance with WFI quality standards after sterilization. Because pure steam comes into direct contact with critical surfaces, its condensate must meet pharmacopoeial WFI requirements to ensure sterility assurance.

EU Annex 1 Section 4 now requires a formal, documented Contamination Control Strategy! Facilities that haven’t completed their CCS, or whose CCS doesn’t cover pressure differential rationale, clean room design justification, and EM approaches under dynamic conditions, are exposed.

A few recommendations:

  • If you haven’t already, complete a formal Contamination Control Strategy document that covers every element listed in EU Annex 1, Section 4. Don’t limit it to aseptic areas; include your supporting Grade-C and Grade-D environments, your clean room design rationale, and your pressure cascade justification.

  • Make sure your EM program includes dynamic monitoring. Static-only requalification data is insufficient for Grade-A areas per current Annex 1 expectations. Verify that monitoring locations, including EM stand heights, represent actual working conditions.

  • Incorporate WFI, Purified Water, Pure Steam, and EM trending into your APQR. If these aren’t explicitly required by your APQR template, revise the template now.

  • Validate pure steam condensate quality against WFI pharmacopoeial standards during sterilizer qualification. This should be part of your standard sterilizer qualification protocol.
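One lightweight way to keep an APQR template honest is to check its reviewed sections against a required-trends checklist during document review. A minimal sketch, assuming the APQR’s sections can be represented as a simple list of names (the required categories mirror the APQR gaps above; everything else is an illustrative assumption):

```python
# Trend categories the APQR should cover for a WFI-dependent sterile
# product -- adjust the checklist to your own template requirements.
REQUIRED_TRENDS = {
    "WFI",
    "Purified Water",
    "Pure Steam",
    "Environmental Monitoring",
}

def missing_apqr_trends(reviewed_sections):
    """Return the required trend categories absent from an APQR's
    reviewed sections (case-insensitive comparison), sorted for
    stable reporting."""
    reviewed = {s.lower() for s in reviewed_sections}
    return sorted(t for t in REQUIRED_TRENDS if t.lower() not in reviewed)

# Example: an APQR that trended EM but omitted the water systems,
# like the gap described above.
gaps = missing_apqr_trends(["Environmental Monitoring", "Yield", "Deviations"])
```

The point isn’t the code; it’s that “does the APQR cover every required trend?” should be a checklist question with a deterministic answer, not something left to the reviewer’s memory.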

3. Aseptic process simulation (media fill) deficiencies kept coming up

Media fill findings showed up at three different facilities across three different countries this quarter, which is worth pausing on.

The sites that struggled here ranged from a newly qualified cell bank operation to an established ophthalmic manufacturer to a biologics drug product CDMO. What they had in common was a tendency to treat aseptic process simulation as a procedural exercise rather than a genuine challenge of worst-case conditions.

This raises a question that every aseptic manufacturer should ask: Does our media fill program actually simulate the conditions, operators, interventions, and materials used in routine production, or does it simulate an idealized version of those things?

The specific deficiencies we found (operator participation gaps, smoke studies under static rather than dynamic conditions, container-closure systems that didn’t match production) all share the same underlying pattern. The simulation was easier or cleaner than reality, which means the simulation didn’t actually validate reality. Regulatory expectations on this point have been consistent for years, and the FDA knows how to look for the specific gaps that distinguish a rigorous APS from a procedural one. (Keep in mind they read this newsletter, too, so we’re bringing these things to both sides’ attention.)

What we found:

  • Operator participation gaps in media fills: At a cell bank manufacturing facility, not all qualified operators participated in the aseptic process simulation exercise, and not all routine interventions were included or challenged. Approved intervention times weren’t defined either. These kinds of gaps undermine the purpose of the media fill: demonstrating that all operators can perform all routine interventions within defined timeframes without compromising aseptic conditions.

  • Smoke study deficiencies: At the biologics CDMO, the smoke study for a filling isolator wasn’t conducted under dynamic conditions — only under static conditions during OQ (when not even all equipment was installed). The smoke generator handle was also too small, requiring the operator to enter the filling area to pump smoke. On top of that, the smoke concentration was low, and the flow pattern at the bottom of the filling area was unclear. We didn’t see any critical interventions challenged during the smoke study, and the timelines for interventions weren’t being clearly tracked for start and end times.

  • Container mismatch between media fill and routine production: At the sterile manufacturer, the media fill validation used primary packaging containers from one supplier, while routine production used containers from a different supplier. A mismatch here raises questions about the representativeness of the simulation: the container-closure system affects fill parameters, sealing characteristics, and handling during aseptic processing. Regulatory expectations generally require that media fill use the same type and source of containers used in routine production, with documented risk assessment and justification if they differ.

The FDA’s process validation guidance and EU Annex 1 both emphasize that aseptic process simulation should be representative of actual manufacturing conditions. Media fills that don’t include all qualified operators, all routine interventions, and the actual container-closure system used in production are vulnerable to challenge.

A few recommendations:

  • Require that every qualified aseptic operator participate in at least one media fill per campaign or per defined interval. Track participation and flag anyone who’s overdue.

  • Define and document your approved intervention types and maximum allowable intervention times in your aseptic processing SOP. Media fill protocols should explicitly list which interventions will be simulated and the acceptance criteria for timing.

  • Run your smoke studies under dynamic conditions with all equipment installed and operational. Document smoke concentration, flow patterns, and intervention challenges with video evidence.

  • Align your media fill container-closure system with routine production. If you have to use a different supplier’s containers, perform and document a formal risk assessment justifying the deviation and demonstrating material equivalence.
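Tracking media fill participation doesn’t need a dedicated system to start; even a simple check against a maximum interval will surface overdue operators. This sketch assumes a 180-day interval and the operator IDs shown, both of which are illustrative placeholders for your site’s own policy and records:

```python
from datetime import date, timedelta

# Hypothetical policy: every qualified operator must participate in a
# media fill within this interval. Substitute your site's requirement.
MAX_INTERVAL = timedelta(days=180)

def overdue_operators(qualified, last_participation, today):
    """Flag qualified operators whose last media fill participation
    is missing or older than the allowed interval."""
    overdue = []
    for op in qualified:
        last = last_participation.get(op)
        if last is None or (today - last) > MAX_INTERVAL:
            overdue.append(op)
    return sorted(overdue)

today = date(2026, 3, 31)
participation = {
    "OP-001": date(2026, 2, 10),  # within the interval
    "OP-002": date(2025, 7, 1),   # more than 180 days ago
}
# OP-003 has no recorded participation at all.
flagged = overdue_operators(["OP-001", "OP-002", "OP-003"], participation, today)
```

Run something like this ahead of each campaign, and the media fill protocol can name the specific operators who must participate rather than leaving coverage to chance.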

4. Change control and procedural governance gaps were consequential

Change control findings appeared across very different operational contexts this quarter in three of the nine audits: a medical device CMO, a clinical packaging facility, and a U.S. biotech’s internal computer systems assurance program.

But the underlying weaknesses, again, were strikingly similar. In each case, the change control framework existed and was generally followed, but the governance around the framework had gaps that allowed changes to slip through with insufficient rigor or to persist longer than intended. Again, something is written, but not actually operationalized.

This post is for paid subscribers
