5 Underused 510(k) Practices That Make FDA Reviews Smoother, Faster, and More Predictable
Why experienced reviewers and consultants keep pushing teams to adopt these long before submission day.
We recently shared some insights from our consultant subject matter experts on CAPA best practices, and it was one of our most popular pieces in a while.
So today, we thought we’d provide a similar playbook for the device and medtech teams out there who have 510(k)s on the horizon, as it’s one of the most common regulatory submission pathways we help teams with.
We’ve worked on literally hundreds of 510(k)s (full builds, rescues, pre-sub strategy work, and review-cycle triage) and we tend to see the same overall pattern year after year: Teams usually check the boxes, but they rarely build the narrative.
This is what repeatedly slows submissions down. FDA reviewers aren’t hunting for surprises. They’re hunting for clarity:
Why this predicate?
Why these differences?
Why this test plan?
Why should they accept your SE argument without another round of questions?
Our consultants (former FDA reviewers and long-time RA/QA submission leads) consistently push for a handful of deeper practices that go well beyond “follow the RTA (Refuse to Accept) checklist.”
We polled some of our busiest regulatory consultants to give you five of the most impactful ones mature teams use, but most firms overlook. If you want help strengthening your own submission strategy (or getting a wobbly one back on track), talk to us!
If premarket notifications are part of your life and you haven’t seen our discussion with one of our top 510(k) experts, Trey Thorsen, MS, RAC, be sure to give it a watch below.
1. Treat your predicate strategy as a real piece of regulatory engineering
Here’s a telling quote from one of our device consultants when we asked for input for this piece:
“I can tell when a team chose their predicate on page 12 of the submission rather than at the beginning of the project.”
A problem like this is felt in the seams of a submission. A lot of teams still default too quickly to the device that shares a product code or looks superficially similar. But the firms that consistently avoid extra cycles do something very different: they elevate predicate selection to the level of an actual design decision.
Here’s what that usually looks like in practice:
They map the entire landscape early: not just the obvious device, but everything that shares the intended use, indications for use, technology class, performance paradigm, and regulatory history.
They examine where their device diverges from each candidate and evaluate which predicate makes the SE argument strongest, not simply feasible.
They use reference devices proactively and transparently, not defensively.
And they document the rationale like an engineer would: “Here’s the alternative, here’s why it introduces more questions of safety/effectiveness than it resolves, and here’s why we’re not using it.”
This level of rigor changes the submission’s tone in ways reviewers want to see. The SE section feels intentional rather than stitched together. The reviewer senses the team has already asked and answered the questions the FDA is about to ask.
Companies that are new to premarket notification or stay glued to rigid workflows tend to underestimate just how powerful that is.
2. Use the pre-sub to shape the regulatory story, not to check the “we met with FDA” box
Pre-subs are the closest thing the 510(k) pathway has to a “design review” with the FDA, and they’re a huge opportunity. But many teams still treat them like a courtesy touchpoint, or worse, a chance to ask very broad questions that don’t meaningfully move the submission forward.
By contrast, when we’re supporting a client and know a device has any complexity (novel sensor modality, borderline predicate alignment, new materials, unusual energy delivery), we treat the pre-sub as the moment the regulatory narrative actually takes shape.
The difference between a high-value and low-value pre-sub can be huge:
High-value: “Here are our two strongest predicate options, here is why we believe Device A is the better regulatory fit, here are the specific differences we’ve identified, and here is the preliminary test plan we propose to close those gaps. Do you agree?”
Low-value: “Do you see any issues with our plan?”
One invites collaboration; the other invites the FDA to raise entirely new concerns you could (and probably should) have uncovered yourself.
When teams bring the FDA a concrete, defensible position (an early version of their SE logic), the pre-sub becomes a guardrail. It narrows the reviewer’s expectations, prevents surprises, and often eliminates a full cycle of avoidable back-and-forth.
We’ve seen pre-subs save months, and we’ve also seen vague pre-subs cost months. The difference is intentionality.
3. Build a Substantial Equivalence argument that assumes the reviewer has never seen your device
One of the biggest submission failures we see is the assumption that a table of differences plus verification testing automatically equals substantial equivalence. It doesn’t, at least not in the reviewer’s mind.
FDA reviewers are trained to look for the questions a device raises, not just whether you completed the tests you said you would. If the narrative doesn’t explicitly walk them through how each difference was assessed and closed, they will ask. And each time they have to ask, it invites delays. It’s rarely a one-and-done clarification.
The strongest submissions we see do something subtle but incredibly effective: they build the SE section like a story.
Not a marketing story, but an engineering story written for non-engineers. They:
Explain the predicate choice as the foundation.
Explore each difference like a branch on a tree—materials, software, indications, energy, accessory interactions.
Tie each branch directly into risk analysis and the corresponding test, evaluation, or justification.
Show their work: how they explored whether each difference introduced different questions of safety or effectiveness and how they demonstrated those questions were resolved.
What emerges is a coherent narrative that answers the reviewer’s concerns before they’re voiced. When reviewers say, “This was a very clear submission,” this is what they mean.
Also, when writing this story, make sure it isn’t written in “impressive-sounding” engineering-speak. A reviewer wants to read something clear and easy to follow that doesn’t demand background on your device or product area. Translate it for them.
4. Treat RTA criteria as a blueprint for information architecture, not a screening tool
We see a surprising amount of frustration around RTA, even among experienced teams who have used the premarket notification pathway many times. But when we dig into rejected submissions or ones that required extensive clarification, the issue usually isn’t the content. It’s the layout.
The most “mature” teams we work with don’t “prepare the 510(k)” and then check the RTA. They do the opposite: they design the submission’s structure directly from the RTA checklist and relevant device-specific guidance documents. This changes the review experience dramatically.
A reviewer opening the submission immediately sees:
A clear, navigable structure where the required content is exactly where it’s supposed to be.
Crosswalks where complexity exists (e.g., software, usability, cybersecurity).
Relevant testing and labeling excerpts embedded directly where the reviewer needs them, not buried in appendices.
A submission that anticipates how reviewers search, skim, and verify content.
The submission becomes easier to assess. The reviewer’s cognitive load decreases.
And every question you prevent is time saved.


