The FDA Group's Insider Newsletter

How to Write Better IQ, OQ, PQ Protocols

Most qualification protocols we review are too vague. Here's how to fix them.

The FDA Group
May 01, 2026

We review a lot of CQV protocols: hundreds each year across pharma, medical device, biotech, and combination product companies. Some are written in-house, some by consultants. Some are legacy documents that have been copied and pasted forward for a decade, with the equipment name swapped out. (That's usually the worst case.)

The uncomfortable truth is that many of them would not survive serious regulatory scrutiny. They'd pass a casual internal review because they look right: correct section headers, references to manufacturer specs, a signature block at the end. But the substance, the part that actually protects the company during an audit or an investigation, is often thin in ways that aren't always obvious.

The people writing these protocols are usually experienced and capable. The problem we typically see is more structural. Organizations treat protocol writing as a documentation exercise when it should be treated as an engineering exercise. The distinction matters in practice.

We spoke with a few of our best CQV engineering-focused consultants about the problems they encounter most often in protocols and the fixes they prescribe.

The most common problem: acceptance criteria that don’t accept or reject anything

If we had to pick one issue that shows up more than any other, it’s acceptance criteria written so broadly that virtually any result would pass.

Here’s what we mean in practical terms. Consider this IQ acceptance criterion, which we’ve seen in various forms dozens of times:

"Equipment shall be installed per manufacturer's specifications."

On its face, this seems reasonable to many people. But as an acceptance criterion, it's actually useless.

  • What specifically needs to be verified?

  • What measurement confirms compliance?

If an auditor asks, "How did you determine that this criterion was met?" the answer can't be "We followed the instructions!" That's a description of the activity, not evidence of the outcome.

A better version would be:

“Centrifuge rotor speed reaches 15,000 rpm ± 100 rpm under no-load conditions, per Section 4.2 of the manufacturer’s installation manual (Doc. No. XYZ-001, Rev. B).”

The difference is that the second version can actually fail. Someone can run the test, record a number, and that number either falls within the range or it doesn't. The first version can't fail because it doesn't define what failure looks like.
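To make the pass/fail logic concrete, here is a minimal sketch in Python. The function name, target, and tolerance are illustrative assumptions echoing the centrifuge example above, not values from any real protocol:

```python
# Hypothetical sketch: a measurable acceptance criterion is just a recorded
# value checked against an explicit range. The names and limits below are
# illustrative, not taken from any real protocol.

def check_criterion(measured_rpm: float,
                    target_rpm: float = 15_000.0,
                    tolerance_rpm: float = 100.0) -> bool:
    """Return True if the measured rotor speed falls within target ± tolerance."""
    return abs(measured_rpm - target_rpm) <= tolerance_rpm

# A recorded result either passes or fails; there is nothing to argue about.
print(check_criterion(15_050.0))  # True  (inside 15,000 ± 100 rpm)
print(check_criterion(15_250.0))  # False (outside the range)
```

"Installed per manufacturer's specifications," by contrast, has no equivalent function: there is no input you could record and no comparison you could make.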

This pattern repeats throughout OQ and PQ protocols as well. We regularly see OQ criteria like “equipment operates within normal parameters” without defining what those parameters are. Or PQ criteria like “product meets quality specifications” without referencing which specifications, what the limits are, or how many samples need to pass.

When acceptance criteria are vague, passing them proves nothing. And when they prove nothing, the entire qualification exercise becomes a paperwork drill rather than an engineering verification. It's not doing your future self any favors.

The copy-forward problem

This is a related issue. You could call it "protocol inheritance." Here's what it looks like:

  • Company A writes a reasonable protocol for a specific piece of equipment in 2016.

  • Over time, that protocol becomes a template.

  • New equipment gets qualified using the same document structure, the same boilerplate language, and often the same acceptance criteria, even when the new equipment has different operating characteristics, different risk profiles, and different user requirements.

We’ve seen cases where a protocol written for a standalone analytical instrument was adapted for a fully integrated manufacturing line with few changes beyond the equipment name and serial number. The OQ section still tested the same ten parameters from the original protocol. No one asked whether those parameters were the right ones for the new equipment.

The fix is straightforward but time-consuming (which is why it's often skipped): every protocol should start from the user requirements and work forward, not from a previous protocol and work backward. Templates save time on formatting and structure. They should never dictate technical content.

OQ protocols that test the middle but skip the edges

This one is specific to Operational Qualification, and it’s common even in organizations with mature quality systems. Let’s walk through it.

Most OQ protocols test equipment at nominal operating conditions.

  • The tablet press runs at its standard speed.

  • The autoclave cycles at its standard temperature and time.

  • The mixing vessel operates at its standard RPM.

Everything passes, the report gets signed, and the equipment moves to PQ.

The problem is that production doesn’t always happen at nominal conditions. Batch sizes vary, operators adjust speeds, and environmental conditions shift with the seasons. If the equipment was only tested at the midpoint of its operating range, nobody actually knows whether it performs acceptably at the edges.

Good OQ protocols test at the extremes of the operating range, not just the center. For a tablet press, that means running at both the lowest and highest speed settings and confirming that tablet weight and hardness remain within specification across the full range. For temperature-controlled equipment, it means testing at the upper and lower bounds of the operating window, not just the setpoint.

In short, if the equipment can’t perform at the boundaries of its approved operating range, the operating range is wrong. It’s always better to find that out during OQ than during a production run.
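The tablet press example can be sketched as a small edge-testing loop. Everything here is invented for illustration, including the simulated weight drop-off at high speed; in a real OQ the measurement would come from the equipment, not a function:

```python
# Hypothetical sketch: exercise the equipment at the edges of its approved
# operating range, not just the nominal setpoint. All speeds, limits, and
# the simulated behavior are illustrative assumptions.

def within_spec(weight_mg: float, low: float = 495.0, high: float = 505.0) -> bool:
    """Check a tablet weight against an (assumed) specification window."""
    return low <= weight_mg <= high

def run_press(speed_rpm: float) -> float:
    """Stand-in for a recorded tablet weight at a given press speed.

    Simulates a fill-weight drop-off above 80 rpm -- the kind of edge
    behavior that testing only at nominal speed would never reveal.
    """
    weight = 500.0
    if speed_rpm > 80.0:
        weight -= 8.0 * (speed_rpm - 80.0) / 20.0
    return weight

# Low edge, nominal, high edge of the (assumed) approved range.
operating_range_rpm = (20.0, 60.0, 100.0)
results = {speed: within_spec(run_press(speed)) for speed in operating_range_rpm}
print(results)  # {20.0: True, 60.0: True, 100.0: False}
```

A protocol that only ran the nominal 60 rpm case would have passed; testing the high edge is what surfaces the failure, which is exactly the point of doing it during OQ.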

No failure mode testing

There’s one question we ask during protocol reviews that almost always gets an uncomfortable pause:

“Where in this OQ protocol do you intentionally cause the equipment to fail?”

Many protocols don’t include failure mode testing at all. Everything in the test plan is designed to pass. The equipment runs through its normal operations, each parameter meets its criterion, and the protocol concludes successfully.

But regulators care whether the equipment responds correctly when something goes wrong.

  • Does the alarm trigger when the temperature exceeds the upper limit?

  • Does the system lock out when an unauthorized user attempts to change a critical parameter?

  • Does the error log capture the event accurately?
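The alarm and error-log checks above can be sketched as a deliberate-excursion test. The controller class, limits, and log format are all invented for illustration; the point is that the test drives the system past its limit on purpose and verifies the response:

```python
# Hypothetical sketch of a failure-mode test: deliberately drive a simulated
# temperature controller past its alarm limit, then verify that the alarm
# trips and the event is logged. The class and its limits are illustrative.

class TempController:
    def __init__(self, upper_limit_c: float):
        self.upper_limit_c = upper_limit_c
        self.alarm_active = False
        self.error_log: list[str] = []

    def read_temperature(self, value_c: float) -> None:
        """Record a reading; trip the alarm and log if the upper limit is exceeded."""
        if value_c > self.upper_limit_c:
            self.alarm_active = True
            self.error_log.append(
                f"HIGH TEMP: {value_c:.1f} C > limit {self.upper_limit_c:.1f} C"
            )

ctrl = TempController(upper_limit_c=125.0)
ctrl.read_temperature(121.0)  # normal operation: no alarm expected
ctrl.read_temperature(126.5)  # deliberate excursion: alarm must trigger
print(ctrl.alarm_active)      # True
print(ctrl.error_log)
```

A test plan in which every step is designed to pass never exercises this branch; the excursion step is what demonstrates the safeguard actually works.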
