Blog

Josiah Wegner

The Best SBIR/STTR Consulting Firms (2026 Guide)

Most SBIR consulting firms focus on training, execution, or past experience, but funding outcomes in 2026 increasingly depend on strategy and positioning. The top firms differ in how they approach proposal development, commercialization, and alignment with agency priorities. Choosing the right partner means understanding these differences and selecting a firm that fits your stage, goals, and competitiveness within the program.

Choosing the right SBIR/STTR consulting firm can significantly impact your chances of securing non-dilutive funding.

With over $4B in annual federal funding available through SBIR/STTR programs, competition is intense, and the difference between winning and losing often comes down to strategy, positioning, and execution.

Quick Summary: Top SBIR Consulting Firms (2026)

Here’s a high-level overview of the most recognized SBIR/STTR consulting firms and what they are best known for:

  • BBCetc
    Best for first-time applicants
    Core strength: Training and SBIR education

  • Dawnbreaker
    Best for commercialization
    Core strength: Market strategy and transition support

  • Eva Garland Consulting
    Best for traditional grant writing support
    Core strength: Grant frameworks and guidance

  • REOFTech
    Best for full-service consulting
    Core strength: End-to-end support

  • BW&CO Consulting
    Best for highest probability of funding
    Core strength: AI + human + validated positioning

Each firm takes a different approach, ranging from training and execution to strategy and positioning. The right choice depends on your stage, needs, and how competitive your target program is.

Top SBIR/STTR Consulting Firms

1. BBCetc (BBC Entrepreneurial Training & Consulting)

Best for: First-time applicants and SBIR training

BBCetc is one of the most established organizations in the SBIR ecosystem, with over 30 years of experience supporting companies through training, proposal development, and program navigation.

Strengths:

  • Strong training programs and workshops

  • Deep experience across agencies

  • Structured SBIR education

Considerations:

  • More education-focused than strategy-driven

  • May be less tailored to advanced applicants or teams already familiar with grant applications

2. Dawnbreaker

Best for: Commercialization and market strategy

Dawnbreaker is widely known for its commercialization support and has worked with thousands of SBIR/STTR awardees.

Strengths:

  • Strong focus on commercialization pathways

  • Market research and partner development

  • Long-standing agency experience

Considerations:

  • More focused on post-award support

  • Less emphasis on early-stage positioning

3. Eva Garland Consulting

Best for: Structured SBIR strategy and guidance

Eva Garland Consulting supports startups and researchers with grant strategy, education, and proposal development.

Strengths:

  • Strong strategic frameworks

  • Educational resources and tools

  • Experience across agencies

Considerations:

  • Based more on historical experience

  • Less emphasis on real-time positioning

4. REOFTech SBIR Consultants

Best for: End-to-end consulting

REOFTech provides full-service SBIR consulting across multiple agencies.

Strengths:

  • End-to-end support

  • Experience across NIH, DoD, DOE

  • Commercialization alignment

Considerations:

  • Broader scope may reduce specialization

  • Varies by engagement

5. BW&CO Consulting

Best for: Companies seeking the highest probability of funding

BW&CO represents a newer model of SBIR consulting, providing end-to-end support based on how funding decisions are made today.

Strengths:

  • AI-driven analysis of agency behavior

  • Human expertise aligned with reviewer mindset

  • Direct validation with Program Officers before submission

What makes it different:
BW&CO emphasizes:

  • identifying precise programmatic fit

  • validating alignment directly with agencies

  • positioning proposals based on real-time signals

The Bottom Line

Most SBIR consulting approaches fall into three categories:

  • training-focused

  • execution-focused

  • experience-driven strategy

A newer category is emerging:

validation-driven, data-informed SBIR strategy

The firms that perform best in 2026 are those that:

  • combine data with human insight

  • validate alignment with agencies

  • focus on positioning, not just writing


Has Your SBIR Consultant Adapted to What NIH SBIR Funding Has Become in 2026?

Most SBIR firms rely on experience or execution, but NIH funding decisions in 2026 go beyond what’s written. Winning proposals align with real-time IC priorities through AI-driven insight, human expertise, and direct Program Officer engagement. The teams that succeed aren’t guessing—they validate fit with NIH before submission and position accordingly.

by Josiah Wegner - CEO

If you’re evaluating NIH SBIR consulting firms and newer AI proposal tools, you’re asking the right question:

Which approach gives me the highest probability of winning an NIH SBIR today, not three years ago?

Grant writing has changed in two major ways in 2026, and most firms haven’t fully adapted.

The Two Shifts Reshaping Grant Funding

1. AI has fundamentally changed what’s possible

AI is no longer optional.

Firms that ignore AI are denying their clients access to capabilities that didn’t exist even a few years ago, including:

  • the ability to analyze large volumes of agency data and past awards quickly

  • identifying patterns in what actually gets funded

  • surfacing strategic positioning opportunities that are easy to miss manually

Without these tools, decisions are based on limited visibility and narrower experience.

At the same time, relying only on AI creates a different problem.

AI-generated proposals often:

  • flatten technical depth

  • miss nuance in how ideas should be framed

  • fail to capture unstated priorities or context

They also lack the human judgment needed to interpret gray areas, ask the right questions, and adjust strategy based on subtle signals.

In some cases, over-reliance on AI can even introduce risk.

As of September 2025, the NIH explicitly states: “Applications that are either substantially developed by AI… will not be considered by NIH.” 

The advantage today isn’t avoiding AI or replacing people with it.

It comes from combining AI-driven insight with human judgment — using each where it adds the most value.

2. NIH Priorities Are Shifting Faster Than Most Applicants Realize

What worked even a few years ago is not a reliable guide today.

This is being driven in large part by recent policy changes under the current Trump administration, which are actively reshaping how agencies define priorities, evaluate proposals, and allocate funding.

We’re seeing real-time shifts such as:

  • stronger emphasis on different aspects of health than in the past

  • giving more power to NIH Program Officers by eliminating paylines

  • tighter alignment between funding decisions and national priorities

These are not gradual changes; they have already happened.

That creates a growing disconnect:

  • Many applicants are still positioning proposals based on how NIH operated in the past, when reviewers held most of the power.

  • Meanwhile, NIH is funding proposals based on how well they align with its internal priorities, where Program Officers hold most of the power.

In some cases, proposals that would have been competitive last year are no longer aligned with what NIH is looking for today.

Success now depends on understanding:

What NIH’s priorities are today and how to position your work accordingly.

Where Traditional Options Fall Short

Strategy-led firms

Strategy-led firms bring strong experience and structured processes. They’ve supported many successful applications and understand the grant landscape well.

But their model is still largely built on human-driven experience and historical pattern recognition.

That creates a limitation:

  • Insights are often based on what has worked before

  • There is less ability to systematically analyze large volumes of current data

  • Quality can vary depending on the individual consultant and time constraints

In a slower-moving environment, this worked well.

In today’s environment, it can lag behind shifting priorities.

Execution-focused firms

Execution-focused firms have strong teams and real experience supporting proposal development. They are known for quality talent and can provide meaningful support in organizing and executing submissions.

Their model emphasizes:

  • writing and submission support

  • structured execution processes

However, it places less emphasis on:

  • identifying precise programmatic fit

  • validating alignment with NIH before submission

  • shaping proposals around NIH priorities

You get capable execution—but not always strategic clarity upfront.

And in today’s environment, execution alone is not enough.

AI Proposal Tools

There is a growing category of tools that promise to generate proposals using AI.

They can be helpful for speed.

But they come with clear tradeoffs:

  • No validation of positioning (the most important thing)

  • Technical depth often gets flattened

  • No understanding of reviewer psychology

And critically:

NIH policy now makes clear that proposals substantially generated by AI may not be considered original and may not be reviewed competitively.

These tools accelerate writing but do not solve the hardest problem: knowing what to write and how to position it.

The BW&CO “MAP” Approach

We built our “MAP” approach specifically for the new 2026 SBIR environment.

M - Market Gap

We identify the specific gap your company fills within an Institute or Center (IC).

We use AI-powered analysis to study:

  • IC priorities

  • past awards

  • other relevant IC signals

This allows us to pinpoint where your work fits in a way most applicants miss.

But we don’t stop at analysis.

We help you validate that positioning through direct outreach and conversation with NIH.

Because not everything that drives funding decisions is written down.

NIH often operates with:

  • informal preferences

  • emerging priorities not yet fully published

  • interpretations of program language that only become clear through discussion

By engaging directly through calls, emails, or conversations, you gain insight into how that particular IC is actually thinking, not just what is written.

Most applicants are guessing where they fit.

We help you confirm it with NIH directly.

A - Aligned Writer

Your proposal is shaped by someone who understands your field and thinks like a reviewer.

While aligning with programmatic priorities is key in 2026, a well-written, well-scored proposal is still essential to getting funded.

Working with a writer who understands your field avoids three common failure modes:

  • founders writing proposals that are too dense or unclear

  • generalist writers missing important technical details

  • AI-generated drafts that lack depth

Our approach combines the best elements of each — without their downsides.

You bring the deep expertise.

We bring someone who understands your field well enough to ask the right questions, surface what matters, and shape it clearly.

We also reduce the burden on your team.

Instead of starting from scratch, we work from:

  • materials you’ve already developed

  • prior proposals

  • technical documents and internal content

We use those inputs to build structured drafts quickly, so you’re not spending hours writing from a blank page.

That means:

  • you don’t lose technical depth

  • your time isn’t consumed writing

  • and the proposal is structured in a way reviewers can immediately understand

Because the person shaping the proposal is close to your domain, they can:

  • ask better questions and identify key gaps

  • highlight what matters most

  • structure the narrative in a way reviewers recognize

And since reviewers often come from similar technical backgrounds, the proposal is aligned with how it will actually be evaluated.

P - Proven Intelligence

Our team has amassed lessons from assisting companies with hundreds of proposals, and we apply them to every submission.

In many firms, outcomes vary based on who you work with and how much time they have.

In practical terms, that often means this:

You might be working with someone who has submitted 5–15 proposals. They’ve had a few wins, learned some lessons, and developed their own approach.

And that’s valuable — but it’s also limited to their individual experience.

So when they review your proposal, the feedback you receive is shaped by:

  • what they’ve personally seen

  • what they remember

  • and what they have time to apply

That becomes the ceiling of insight going into your submission.

Our approach works differently.

With every proposal — no matter who you’re working with on our team — you benefit from the accumulated lessons of hundreds of submissions, across various ICs, outcomes, and scenarios.

We’ve taken those lessons:

  • what led to wins

  • what caused proposals to fall short

  • how reviewers responded in different contexts

And we’ve embedded them into our process using AI and structured review systems.

What that looks like in practice:

Instead of one person saying,

“In my experience, this section could be stronger…”

Your proposal is evaluated against patterns like:

  • how successful proposals frame innovation vs. risk

  • where reviewers tend to get confused or lose confidence

  • which sections consistently correlate with higher scores

  • how positioning shifts based on agency priorities

And those insights are applied consistently, not just when someone happens to think of them.

This is how AI has changed the game.

It allows us to move beyond individual experience and apply collective experience at scale.

So instead of getting the benefit of one person’s 10 proposals,

you’re getting the benefit of experience across hundreds of proposals applied systematically to yours.

And that’s a level of consistency and depth that traditional models simply can’t match.

The Bottom Line

Most options fall into one of three categories:

  • experience-driven firms → strong, but limited by human bandwidth

  • execution-focused firms → organized, but not deeply strategic

  • AI tools → fast, but lacking depth and positioning

BW&CO is built differently.

We combine:

  • validated positioning based on real agency signals and direct conversations with those agencies

  • human insight aligned with how reviewers think

  • AI that scales experience across hundreds of proposals

Because in today’s environment, winning isn’t about writing more proposals.

It’s about submitting the right proposal, positioned the right way, with a full understanding of both the written and unwritten rules behind funding decisions.

Contact us to learn more.


We’re Guessing What the Government Actually Wants

Federal solicitations, especially DoD topics, tell you the rules but they rarely reveal the full picture of what the government actually wants. Winning proposals go beyond compliance by interpreting mission drivers, funding realities, and unstated operational needs through informed conversations, funding history, and strategic positioning. The teams that succeed are not guessing blindly; they are reading between the lines and aligning their solution to the problem behind the problem.

by Kristian Mueller - Chief Grants Officer

If you’ve ever read a federal solicitation, especially a DoD SBIR/STTR topic, and thought “I can follow every instruction here and still miss the mark,” you’re not wrong.

One of the least talked-about realities of federal proposal writing is this: the solicitation is necessary, but rarely sufficient. The strongest proposals don’t just comply; they anticipate what the government actually wants, even when the agency can’t say it outright.

After years of writing and reviewing proposals for technology startups, and working directly with engineering teams and government stakeholders, we’ve learned that winning proposals live in the space between strict compliance and educated interpretation.

Let’s talk about what that really means.

Yes, You Must Follow the Solicitation, But That’s the Floor, Not the Ceiling

Let’s get this out of the way first:
You must follow the solicitation requirements exactly.

Page limits, formatting rules, evaluation criteria, topic scope, cost caps: these are non-negotiable. Noncompliance can get you rejected before anyone reads your brilliant technical approach.

But here’s the uncomfortable truth:
Hundreds of proposals follow the rules. Very few get funded.

Reviewers aren’t scoring you on whether you can read instructions. They’re scoring you on whether your solution aligns with a mission need that often isn’t fully spelled out.

That’s where reading between the lines comes in.

Especially for DoD: Some of the Real Requirements Aren’t Written Down

DoD solicitations are a special case. Many topics are intentionally vague, constrained, or abstracted, sometimes because of classification, sometimes because the program office itself is still refining the requirement.

When a topic says something like:

“Innovative approaches for resilient, next-generation sensing in contested environments”

What it might mean is:

  • They’ve had a system fail in the field

  • A prime contractor solution is too expensive or fragile

  • They need something deployable in 18–36 months

  • They want optionality, not a single locked-in architecture

None of that will appear explicitly in the solicitation.

Your job, and ours, is to infer those drivers without inventing things that aren’t defensible.

How Do You Figure Out What’s Not in the Solicitation?

This is where many small companies struggle, especially first-time proposers. The good news: there are ways to reduce uncertainty and make informed guesses instead of blind ones.

1. Talk to the Humans Behind the Topic (Yes, You’re Allowed To)

For most SBIR/STTR and BAA opportunities, you can, and should, reach out to:

  • Topic Authors

  • Technical Points of Contact (TPOCs)

  • Program Managers or Directors

These conversations won’t give you proprietary information, but they can clarify:

  • What success looks like at the end of Phase I or II

  • Whether the topic is exploratory or transition-driven

  • What applications they care about most

  • What approaches they’ve already seen (and aren’t excited about)

A short, well-prepared email or call can save months of misaligned proposal work.

2. Read the Agency’s Funding History Like a Signal, Not a Spreadsheet

Past awards are one of the most underused sources of insight.

Looking at what an agency or program office has funded before can tell you:

  • Technology maturity they’re comfortable with

  • Typical Phase II award sizes

  • Whether projects routinely transition or stall

  • Which primes or end users show up repeatedly

For recurring BAAs or long-running programs, funding patterns often reveal strategy. Are they doubling down on a capability area? Spreading bets? Letting one domain quietly sunset?

Those patterns should influence how you position your technology.
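
If it helps to see this concretely, the short sketch below (in Python, using pandas) shows one way a team might summarize an award-history export. The file name and column names are placeholders rather than any particular agency’s export format; the point is simply to look at counts, award sizes, and repeat funding by year and by branch.

```python
# Illustrative sketch: summarizing an exported award-history CSV with pandas.
# The file name and column names ("Award Year", "Branch", "Award Amount",
# "Phase") are assumptions; adjust them to match whatever export you have.
import pandas as pd

awards = pd.read_csv("past_awards.csv")

# How many awards, and how much funding, per year?
by_year = (
    awards.groupby("Award Year")["Award Amount"]
    .agg(["count", "sum", "median"])
    .rename(columns={"count": "awards", "sum": "total", "median": "median_award"})
)
print(by_year)

# Which branches or topic areas keep showing up? Repeated funding is a signal.
by_branch = (
    awards.groupby(["Branch", "Phase"])["Award Amount"]
    .agg(["count", "median"])
    .sort_values("count", ascending=False)
)
print(by_branch.head(10))
```

A summary like this won’t tell you what the program office wants, but it will show you where the money has actually gone, which is the pattern you want to position against.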

3. Understand the Money Reality (Not Just the Maximum Award)

Solicitations often list a maximum award amount. That does not mean:

  • That amount is fully available

  • Every topic will be funded equally

  • The program isn’t already partially allocated

Experienced proposal teams pay attention to:

  • Number of awards anticipated

  • Typical funding per award in prior years

  • Whether this is a new call or a continuation

  • Signals of constrained or expanding budgets

This matters not just for pricing, but for scope credibility. Over-promising in a constrained funding environment is a fast way to lose reviewer confidence.

What Reviewers Are Actually Asking When They Read Your Proposal

Even when they’re scoring against formal criteria, reviewers are often implicitly asking:

  • Does this team understand our problem, not just the topic text?

  • Is this a real solution, or a research project looking for a use case?

  • Can this company execute, or will we be managing them?

  • Does this scale beyond Phase I/II?

  • Would I want to champion this internally?

Great proposals answer these questions without ever stating them explicitly.

So… Are We Guessing?

Yes, but not randomly.

Winning federal proposals are built on informed inference:

  • Grounded in the solicitation

  • Reinforced by agency behavior

  • Clarified through direct engagement

  • Shaped by real engineering constraints

At BW&CO, this is where our background in both technology development and proposal review matters. We don’t just translate your technology into government language; we help position it in the context of what the government is quietly trying to accomplish.

Because the truth is, the government often can’t say exactly what it wants.

But with the right approach, you can still give it to them.

Contact us to learn more.



Our Technology Is Strong but Reviewers Don’t Get It

Founders often believe their technology is strong and reviewers simply “didn’t get it,” but in non-dilutive funding, clarity, alignment, and mission fit matter just as much as innovation. This article breaks down why strong science alone isn’t enough and how reframing your proposal through the reviewer’s lens can dramatically improve your chances of winning awards.

by Sameera Panchangam - Senior Grant Consultant

It is one of the most common and frustrating statements we hear from founders pursuing non-dilutive funding:

“Our technology is strong. The reviewers just didn’t get it.”

And often, the first part is true. The technology is strong.

But here is the uncomfortable reality:

If reviewers do not get it, the problem is rarely the reviewer.

It is almost always the proposal. Strong technology alone does not secure funding. Perspective does.

When You Are Too Close to the Product

Founders are deeply committed to what they build. You have spent years refining the science, engineering around constraints, protecting IP, and proving feasibility. That depth of knowledge is a strength.

But it can also create blind spots. You may be attached not only to the product, but also to the story you tell about it.

When we suggest reframing objectives, narrowing aims, repositioning impact, or even changing agencies, it can feel like we are undermining the core innovation.

We are not.

We are translating it for a reviewer who is seeing it for the first time.

Reviewers do not live inside your roadmap. They do not know your technical shorthand. They only see what is written on the page.

Clarity is not simplification. It is a strategy.

You May Be Married to the Wrong Agency

Another common challenge is agency misalignment.

For example, a founder building advanced sensing technology for crop optimization may target NSF because the platform is novel and technically sophisticated.

But the proposal emphasizes farm productivity, food security, and field validation across specific commodities. That may align more tightly with USDA priorities.

Similarly, a dual-use autonomy platform may feel like an NSF engineering proposal, yet its strongest case may lie in resilience, mission readiness, or logistics optimization in a defense context.

Each agency has a mission. Each division within that agency has specific priorities. Some are focused on basic science, others on mission execution, and others on deployment and transition.

Defense agencies in particular evaluate not just technical novelty, but operational relevance, integration pathways, and end-user value.

Strong innovation in the wrong mission context is still misaligned.

Understanding the intent behind the funding opportunity is as important as understanding the science.

Who Is Actually Reviewing Your Proposal?

Review panels are diverse.

They may include:

  • Academic researchers

  • Industry technologists

  • Former founders

  • Program awardees

  • Domain specialists and scientists

  • Mission operators in defense contexts

They are trained to apply scoring criteria objectively. They are instructed to dissect feasibility, impact, risk, and alignment.

But they are also human.

They may review dozens or even hundreds of applications in a cycle. They are balancing this responsibility with full professional workloads.

If your proposal buries the value proposition, overloads the reader with technical detail before establishing the problem, or assumes prior knowledge, cognitive fatigue sets in quickly.

An exercise we recommend to founders:

Review your application once as a professor, evaluating rigor.
Then review it as an operator evaluating utility.
Then review it as a commercialization expert evaluating transition potential.

Would each of them clearly understand why this matters within the first few pages?

If not, revise.

The Broken Leg Test

Across sectors, one principle holds: Reviewers are looking for a clear and compelling problem.

In health, it may be a clinical gap.
In agtech, it may be yield loss or resource inefficiency.
In defense, it may be operational vulnerability or capability gaps.
In energy, it may be grid instability or storage constraints.

Whatever the domain, the problem must feel urgent and real.

If the pain point is vague or abstract, interest fades.

Reviewers are not there to infer the need. They are there to evaluate whether you have clearly articulated it.

If they cannot see the broken leg, they will not fund the cast.

One Major Weakness Can Influence Everything

Even when the technology is strong, a single glaring issue can affect overall perception.

It could be:

  • An unrealistic transition timeline

  • Weak validation data

  • A mismatch between objectives and methods

  • No clear path to end users

  • Lack of regulatory or compliance awareness

  • Insufficient understanding of field deployment constraints

Reviewers strive for objectivity. Still, an early red flag can create doubt that influences scoring across sections.

That is human nature.

Coherence matters. Internal logic matters. Alignment between problem, solution, validation, and impact matters.

Non-Dilutive Funding Is Mission-Driven

Unlike venture capital, government agencies are not investing in valuation growth.

They are investing in mission outcomes.

That mission may be:

  • National security

  • Agricultural resilience

  • Energy independence

  • Scientific advancement

  • Economic competitiveness

  • Public health

Your proposal must clearly connect your innovation to that mission.

The strongest applications consistently do three things:

  1. Demonstrate credible technical innovation

  2. Define a clear and urgent problem

  3. Align tightly with agency mission and program priorities

If one of these elements is weak, reviewers will notice.

What We See at BW&CO

At BW&CO, we spend significant time analyzing prior reviewer comments across agencies and sectors. Patterns emerge regardless of the domain.

Common critiques include:

  • Impact not clearly articulated

  • Transition pathway insufficient

  • Overly ambitious scope

  • Limited understanding of the end-user environment

  • Weak commercialization or deployment strategy

Studying prior reviews helps us understand how panels think. It reveals trends in what agencies emphasize year over year. It also helps founders step outside their attachment to the technology and into the mindset of the evaluator.

When you understand the reviewer’s incentives, you stop writing to explain your technology and start writing to justify investment by demonstrating mission alignment.

Final Thought

When proposals are not funded, it is easy to conclude that reviewers did not understand the innovation.

More often, they understood exactly what was presented.

They just did not see enough clarity, alignment, urgency, or feasibility to justify funding within their mission constraints.

Strong technology is the foundation.

But perspective wins awards.

If you are exploring non-dilutive funding across health, defense, energy, climate, or agriculture and wondering why strong science is not translating into awards, the issue may not be your core innovation.

It may be how it is framed, aligned, and communicated.

At BW&CO, we help founders see their proposals through the reviewer’s lens and build applications that speak not just to innovation, but to mission.

Contact us to learn more about how we can help your next proposal succeed.



The Forgotten Customer: Why Health Tech Startups Overlook the VA

Most startups focus on DoD and ignore the VA. Here’s how biotech and health tech companies can align with VA health priorities and secure advocates.

By Robert Wegner - Chief Revenue Officer

When doing business with the government, most biotech, medtech, and health tech companies focus on one place: the Department of Defense.

If they don’t have something that directly supports the active warfighter, they usually move on. They assume government work is not for them.

That’s a mistake.

What most founders forget is that the Department of Veterans Affairs operates 170 medical centers and more than 1,000 outpatient sites. It serves over 9 million enrolled veterans each year. It is one of the largest integrated healthcare systems in the country.

If you are building technology for chronic disease, aging, mental health, oncology, or rehabilitation, the VA is not a side option. It is a major healthcare customer.

Ignoring it leaves opportunity on the table.

The VA Is a Real Healthcare System — Not a Side Program

The VA is not just a benefits administrator.

It delivers care every day. It runs specialty clinics. It manages long-term chronic conditions. It treats middle-aged and geriatric populations at scale. It funds research. It buys equipment. It runs clinical studies.

If your product is built for hospitals, health systems, or aging populations, there is a strong chance it applies to the VA.

The key is alignment.

Not “we improve healthcare.”

But “we solve this specific problem the VA has publicly said it cares about.”

Health Priority Areas: Where Alignment Often Exists

The VA maintains a wide range of research and clinical priority areas. These are tied directly to patient needs across its system.

Several common areas of interest include:

  • Alzheimer’s disease and related dementias

  • Arthritis and musculoskeletal disorders

  • Brain health, including mental health and PTSD

  • Cancer, including prostate and ovarian cancer

  • Cardiovascular disease

  • Diabetes, obesity, and kidney disease

These are not niche categories. They reflect the real disease burden inside the veteran population.

If you are building diagnostics for early dementia detection, remote cardiac monitoring, oncology tools, metabolic disease platforms, AI-enabled mental health support, or rehabilitation technologies, there may be direct overlap.

The question is not whether the VA “funds innovation.”

The question is whether your product addresses a defined health priority within its system.

If you can clearly connect your solution to one of these areas, you are in a stronger position than most startups that approach government work.

Securing Internal Advocates

Alignment on paper is not enough.

You need a human inside the system who believes in what you are building.

Once you identify a relevant health topic, your next step is to find an internal advocate. This is someone working within or closely affiliated with the VA who understands the disease state and sees how your solution could help veterans.

This could be at the local level, such as a clinician at the VA Palo Alto Health Care System. It could also be at the national level, such as someone connected to the VA Office of Research & Development (ORD).

These advocates matter for three reasons.

First, they help you refine your value proposition so it actually fits VA workflows and patient populations.

Second, they guide you toward the right funding mechanisms or pilot pathways.

Third, they provide credibility when you pursue formal studies, grants, or procurement vehicles.

Without internal support, most companies stall. They submit cold applications. They wait for responses. They burn time.

With internal support, conversations move faster and become more focused.

Funding and Procurement Pathways

Once you have meaningful engagement from an internal advocate, you can begin evaluating funding and procurement options.

There are multiple pathways depending on your stage.

If you are still in pre-clinical or early clinical development, research-focused mechanisms may be appropriate. In some cases, programs like CDMRP can provide significant funding for assets that align with veteran health priorities.

If your product is further along and ready for deployment, there may be procurement pathways within the VA itself. This can include structured pilot programs, clinical studies, or direct purchasing mechanisms for solutions that address defined needs.

The mistake many startups make is chasing mechanisms first.

They search for open calls. They draft proposals. They try to “fit” themselves into whatever funding vehicle is available.

That rarely works.

The more effective approach is this:

  1. Confirm alignment with a VA health priority.

  2. Secure internal support.

  3. Then identify the right mechanism.

When done in that order, funding becomes a tool — not a gamble.

The Practical Takeaway

If you are a biotech, medtech, or health tech founder, do not assume government work equals defense contracts.

The VA is a massive healthcare provider. It treats millions of patients across many of the exact disease areas commercial startups target.

Before you dismiss government engagement, ask:

Does our product clearly align with a VA health priority?
Can we identify a specific patient population within the system?
Do we have a plan to secure an internal advocate?

If the answer is no, you need refinement.

If the answer is yes, you may have a serious opportunity.

If you are considering the VA as a co-development partner or procurement customer, book a call with our team. We’ll help you assess alignment, identify realistic entry points, and determine whether pursuing the VA makes strategic sense for your stage.

The VA is often the forgotten customer.

For the right company, it shouldn’t be.


Stop Wasting Time on Low-Probability Grant Opportunities

Here’s how to prioritize which grants to apply for.

by Narges Tafreshi - Grant Consultant

For many founders, researchers, and small business teams, grant funding is supposed to be a catalyst, not a distraction. Yet too often, teams find themselves pouring months of effort into applications that were never realistically competitive in the first place.

The result? Burned time, exhausted teams, and stalled momentum.

The Hidden Cost of “Shot-in-the-Dark” Applications

Grant writing isn’t just about filling out forms. It requires:

  • Strategic planning

  • Scientific and technical alignment

  • Budget modeling

  • Internal coordination and reviews

When an opportunity has a very low probability of success, the real cost isn’t just the submission fee or consultant time; it’s the opportunity cost. Those months could have gone toward product development, customer discovery, partnerships, or higher-probability funding paths.

Why So Many Grant Opportunities Are Low Probability

Many grant solicitations look promising on the surface, but hide structural challenges that drastically reduce competitiveness:

  • Overly broad solicitations that attract hundreds of applications

  • Unpublished or shifting paylines, especially in SBIR/STTR programs

  • Institute-specific preferences that aren’t obvious from the FOA

  • Portfolio-balancing decisions that have little to do with scientific quality

Without insight into how funding decisions are actually made, teams often assume that a “decent” application is good enough. In reality, competitiveness is relative, and often far more selective than it appears.

Scoring Alone Doesn’t Tell the Whole Story

NIH and other agencies rely on impact scores and review outcomes, but funding decisions are rarely mechanical.

A technically strong application can still fall short if:

  • It doesn’t align tightly with current program priorities

  • The institute’s budget is constrained in that cycle

  • Similar projects were recently funded

  • The application lands just outside a practical zone of consideration

This is why many applicants receive solid reviews, and still no award.

A Smarter Question to Ask Before Applying

Instead of asking, “Can we apply?”, more teams are now asking:

“Is this opportunity actually worth our time?”

High-probability opportunities tend to share a few characteristics:

  • Clear alignment with institute or program priorities

  • Realistic competitiveness based on recent funding patterns

  • A defined scoring or selection pathway

  • A credible strategy to address reviewer risk

If those elements aren’t present, even a well-written application may be a long shot.

Fewer Applications, Better Outcomes

The most successful grant-funded teams are not applying to everything. They are:

  • Selective about where they invest effort

  • Strategic in how they use prior scores and reviewer feedback

  • Focused on opportunities where marginal improvements can meaningfully change outcomes

This approach doesn’t reduce ambition; it increases efficiency.

Moving From Hope-Based to Strategy-Based

Grant funding will always involve uncertainty. That’s inherent to competitive, peer-reviewed systems. But uncertainty does not have to mean guesswork, and it doesn’t have to mean going it alone.

A strategy-based approach replaces blind optimism with informed confidence. It allows teams to make deliberate decisions about where to invest their time, energy, and scientific effort before months are spent writing.

By prioritizing fit, probability, and timing, organizations can move away from low-yield opportunities and toward a funding roadmap that is both realistic and forward-looking. This includes understanding which programs are actively funding work like yours, how competitiveness is assessed, and when small adjustments can meaningfully improve outcomes.

Importantly, this approach doesn’t limit opportunity; it focuses it. Many strong ideas fail to secure funding not because they lack merit, but because they were submitted to the wrong mechanism, the wrong institute, or at the wrong moment. Strategy turns those uncertainties into informed choices.

When teams apply selectively and intentionally:

  • Applications become clearer and more compelling

  • Reviewer feedback becomes more actionable

  • Resubmissions are more efficient and targeted

  • Funding outcomes feel repeatable, not random

The goal isn’t to submit more applications. It’s to win the right ones, at the right time, with the right positioning. With a strategy-driven approach, grant funding becomes not just possible, but increasingly predictable.

Need help with crafting your grant strategy? Contact our team today.

 


Understanding NIH Impact Scores


By Narges Tafreshi - Grant Consultant

As NIH review results are released, many applicants are asking the same question: how should I interpret my outcome, and what does it mean for funding? Recent NIH review cycles have introduced clearer distinctions in how applications are categorized, making it especially important to understand what your result actually signals.

How NIH Review Outcomes Are Assigned

NIH study sections now generally place applications into one of three categories during peer review:

1. Not Discussed

Applications in this category were not selected for full discussion by the study section. This typically means reviewers felt the application was not competitive relative to others in the pool.

  • No overall impact score is assigned

  • Written critiques are still provided

  • These applications are generally not considered for funding in the current cycle

While disappointing, this outcome can still provide useful feedback for a substantial revision or repositioning.

2. Not Discussed – Competitive

This is a newer and important distinction. These applications were not discussed, but reviewers considered them borderline or potentially competitive.

  • No overall impact score is assigned

  • Reviewer comments often indicate specific, fixable weaknesses

  • These applications may be close to discussion thresholds in strong review rounds

A “Not Discussed – Competitive” outcome often signals that a targeted resubmission could meaningfully improve funding chances.

3. Scored

Applications that are discussed by the study section receive an overall impact score, typically ranging from 10 (best) to 90.

  • Lower scores indicate stronger perceived impact

  • Being scored means the application cleared the initial competitiveness bar

  • A scored application can still fall inside or outside likely funding ranges, depending on institute priorities and budget

From Impact Score to Funding Decision

Impact scores are evaluated in the context of institute-specific paylines or zones of consideration. A payline represents the approximate score threshold below which applications are more likely to be funded, based on available funds and application volume.

  • Applications scoring better than the payline are more likely to be funded

  • Applications just outside the payline may still be considered, especially with strong programmatic relevance

  • Applications well outside the payline typically require either exceptional justification or resubmission

It’s important to note that many NIH institutes do not publish formal paylines, particularly for SBIR and STTR programs.
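
For readers who think in code, here is a minimal sketch of how those categories and a payline threshold fit together. It is purely illustrative: the category labels mirror the descriptions above, and the payline value is a placeholder, since, as noted, many institutes never publish one.

```python
# Illustrative sketch only. The category labels mirror the descriptions above;
# the payline value is a made-up placeholder, since many institutes
# (especially for SBIR/STTR) do not publish one.
from typing import Optional

def triage_outcome(category: str, impact_score: Optional[int] = None,
                   assumed_payline: int = 30) -> str:
    """Suggest a rough next step from the review category and, if scored, the impact score."""
    if category == "Not Discussed":
        return "Plan a substantial revision or repositioning before resubmitting."
    if category == "Not Discussed - Competitive":
        return "Borderline: mine the critiques for fixable weaknesses and consider a targeted resubmission."
    if category == "Scored":
        if impact_score is None:
            return "Scored: compare the impact score against recent institute funding patterns."
        if impact_score <= assumed_payline:
            return "Inside the assumed payline: ask your Program Officer about likelihood of funding."
        return "Outside the assumed payline: discuss programmatic relevance with your PO or plan a resubmission."
    return "Unknown category."

print(triage_outcome("Scored", impact_score=24))
```

The real decision is never this mechanical, but laying it out this way makes the key point visible: the category and score only tell you which conversation to have next.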

What’s New in Current NIH Funding Cycles

Recent funding cycles have placed increased emphasis on:

  • Programmatic fit, not just numerical rank

  • Portfolio balance and strategic priorities

  • Institute-level discretion, particularly for small business awards

As a result, applicants with competitive scores near traditional paylines are seeing more variability in outcomes. Early communication with Program Officers (POs) has become increasingly valuable, especially for applications that are scored or labeled “Not Discussed – Competitive.”

SBIR and STTR Applications: Additional Context

For SBIR (R43/R44) and STTR (R41/R42) applications, funding decisions depend on more than the impact score alone. Reviewers and institutes also weigh:

  • Commercial and translational potential

  • Technical feasibility and risk

  • Alignment with institute-specific missions

Because SBIR/STTR paylines are often unpublished, understanding how your review outcome is categorized is a key signal for next steps.

What to Do After You Receive Your Outcome

Once your NIH review results are released:

  1. Identify whether your application was Not Discussed, Not Discussed–Competitive, or Scored

  2. Carefully review reviewer comments for common themes

  3. Compare your outcome to recent institute funding patterns

  4. Consider outreach to your Program Officer to assess competitiveness and strategy

  5. Decide whether revisions could materially improve your positioning

While no single outcome guarantees funding, understanding how NIH now categorizes applications allows you to make more informed, strategic decisions about resubmission and next steps.

If you’d like our team’s free assessment, click here.
