Innovation & Offer Development

by Anton Lundberg & Joachim Rask

April 14, 2026

How to Validate a New Offering Before You Build It

Three questions should be asked before any new offer goes into development. Is the need real? Are we the right provider? Will customers pay? They almost never get asked.

There's a moment in most new offer development processes where three questions should be asked out loud — and almost never are. Is the need actually real, in the way we're imagining it? Are we the right provider to solve it? And will customers pay, at a level that makes this worth building? The internal logic tends to feel sufficient. The team is energised. The capability gap looks manageable. So the questions go unasked, development begins, and the first honest answers arrive when the offer meets the market.

By that point, the cost of being wrong has compounded considerably. Months of development, a growing backlog, a launch date on the calendar — and a market with a different view. We've seen this pattern more times than we'd like. It's not a failure of ambition or resource. It's a sequencing problem: those three questions were answered internally, with assumptions, rather than externally, with evidence.

Knowing how to validate a new offering before you build it is one of the most commercially underrated capabilities in product companies. Most teams discover its value the hard way.

Validation is the work, not a checkpoint

Most teams treat validation as a late-stage gate — a pilot, a soft launch, a review before the rollout. Something you do once the offering is largely formed. The problem is that by that point, you've already made the expensive decisions: what the product does, how it's delivered, what it costs to serve. Running validation after the build is like stress-testing a bridge after the traffic starts moving. You'll learn something. But your options are much narrower than they were six months earlier.

The shift that matters is moving validation to the hypothesis stage, before a single line of specification is written. Starting with benefit areas — the specific outcomes customers are trying to secure — rather than with a solution to validate is the foundation of this approach.

"The harder question isn't whether this works as designed. It's whether anyone actually needs it, in the way you're imagining it — and whether you're the right provider to deliver it."

That question sounds simple. It is genuinely difficult to answer without talking to customers in a structured way — and most organisations skip it because the internal logic feels sufficient, the team is energised, and asking feels like it might slow things down. It doesn't slow things down. It redirects effort before it compounds.

What good validation looks like in practice

Envac, the Swedish pneumatic waste collection company, faced this situation when exploring a new service offering for their international markets. The internal hypothesis was clear: an adjacent service opportunity their existing infrastructure and technical expertise could support. The capability gap was manageable. The market opportunity looked real.

Rather than build toward it, they started by talking to customers — structured conversations designed around each of the three validation questions in turn. Was the need real, and how urgent? Were Envac the credible provider for it? And would customers actually pay, at what terms, under what conditions?

What those conversations revealed reshaped the offering. Some assumed needs weren't felt as acutely as internal analysis suggested. Others were more urgent and more specific than the team had mapped. One customer problem that wasn't on the original brief turned out to be the strongest signal of real demand. The offering that went to development looked meaningfully different from the one that had been scoped — smaller in some ways, more focused in others, and priced against a different value driver entirely. That's validation doing what it's supposed to do.

Three questions your validation needs to answer

Not all customer conversations are validation. Asking your best customers whether they'd find something useful is a poor proxy for demand — they tend to be generous, and their context may not reflect the broader market. Validation works when it's designed to answer three specific questions, each requiring a different approach.

Is the need real? This means conversations structured around the customer's current situation, not your proposed solution. You're listening for frequency, urgency, and the workarounds they've already built — the homemade fixes that signal a problem worth solving. A need that customers have learned to work around is often a stronger signal than one they're actively looking to solve.

Are we the right provider? Even a real, urgent need doesn't automatically make you the logical answer. Customers assess providers on credibility, delivery capability, and relationship — and in B2B markets, switching cost and trust matter as much as the offer itself. Of the three, this is the question most often skipped, which is why offerings with strong underlying logic still fail at the commercial stage.

Will they pay? Willingness to pay is best tested through direct conversation, not surveys. The most reliable signal isn't a yes or no — it's the specificity of the conditions: at what price point, under what contract terms, in what timeframe. Vague enthusiasm is not a buying signal. Specific conditions are.

When you have clear, convergent answers to all three, you're ready to build. When you're still unclear on one of them, you're not — regardless of how confident the internal team feels.

Start before you're ready

The most common objection is that validation takes time the team doesn't have. In our experience, five well-structured customer conversations — run before the next development brief goes to sign-off — will tell you more than any internal review. You're not looking for consensus. You're looking for signal: the specific, repeated, commercially grounded indication that the need is real, that you can serve it, and that customers will pay for the outcome.

When that signal is there, build with confidence. When it's absent or mixed, that's the most valuable information your team can have. The cost of building wrong isn't measured in budget alone. It's the credibility of the team, and the organisation's appetite for the next idea.

Key takeaways

Validation that happens after the build is too late to change the decisions that matter most — sequence it before development commitments are made.

Three questions need clear answers before you build: Is the need real? Are you the right provider? Will they pay? Vague enthusiasm is not a substitute for convergent, specific answers to all three.

Structured customer conversations should focus on the customer's current problem and workarounds, not on your proposed solution. You're listening for signal, not collecting endorsements.

Willingness to pay is tested through direct conversation, not surveys. The reliability of the signal lies in the specificity of the conditions a customer names — price point, contract terms, timeline.

Five well-structured customer conversations before the next development brief goes to sign-off will redirect more effort, and save more resource, than any internal review process.

Recognise your organisation in any of this?

If this resonates, there's a good chance we can help. Let's have a straight conversation about where you are.

Let's connect