Customer-Centric Strategy

by Anton Lundberg & Joachim Rask

May 11, 2026

What Customer Focus Looks Like Inside a Product Organisation

Measuring customer satisfaction is not the same as being customer focused. Most product organisations know the difference in theory. This is about what it takes to close the gap in practice.


Most product companies have already made the diagnosis. They know the gap between believing you are customer focused and actually operating that way. They can describe inside-out thinking, recognise it in their planning processes, and articulate what outside-in strategy would require. If you haven't read What Is Outside-In Strategy or Inside-Out vs Outside-In Strategy, those articles cover the distinction. This one starts from where they leave off.

The harder question is what happens next. Because understanding the problem and changing it are two very different things. We've worked with leadership teams who had real clarity on the diagnosis and still found, eighteen months later, that the internal logic was winning the same arguments it always had. The issue wasn't awareness. It was that the organisational structure, the incentive design, and the sequence of decisions consistently pulled in a different direction.

Customer focus is not a mindset shift that follows from a good offsite. It's a set of structural changes that either happen or don't. Three of them tend to determine whether it takes hold.

Where insight enters the process

The most important lever is not how much customer insight the organisation gathers. It's where that insight enters the decision-making process.

In most product organisations, the sequence runs like this: someone proposes a direction, options get shortlisted, investment gets considered, and then customer input arrives as a validation step. The research is real. The interviews happened. The findings get presented. But by the time they reach the table, the frame has been set. The options on the shortlist reflect what the organisation already believed. Customer insight gets filtered through that frame. It confirms one of the options, challenges the detail of another, and rarely changes the direction of travel.

"The insight is present in the room. It just isn't upstream of the decisions."

Shifting this requires something more specific than 'involving customers earlier.' It means restructuring which conversations happen before which. Portfolio decisions (which segments to prioritise, which products to invest in, which markets to enter or exit) need to begin with a genuine account of where customer needs are heading and where the current offer falls short. Not as background context. As the frame within which options are generated. The structural approach to building this into strategy is covered in How to Build a Customer-Centric Strategy.

The practical test is straightforward. Before the shortlist forms, can the leadership team give a specific, honest answer to this question: what do customers in each of our target segments actually value right now, and what would have to change about our offer for us to be their obvious choice in three years? If that question gets a vague or internally derived answer, the strategy process is still starting from the inside.

What happens when customer signals contradict internal investments

Every organisation that has committed significant resources to a product direction has built internal momentum behind it. People championed it. Leaders approved it. Teams built careers around delivering it. When a customer signal arrives that challenges that direction, the pressure to reinterpret it is considerable.

We've watched this play out in ways that are subtle enough that the organisation doesn't recognise what's happening. The signal gets assessed: is this customer representative of the segment, or an outlier? Is this feedback about the product or about how it was implemented? Is their need shifting, or are they describing a temporary frustration? These are all legitimate questions. But when they're asked reflexively, when the internal logic consistently passes the test and the external signal consistently doesn't, the organisation has developed an immune system that rejects outside-in thinking without appearing to.

The diagnostic is not whether the organisation challenges customer signals. It's what happens to the signals that survive the challenge. In organisations where customer focus is real, a validated signal that contradicts an internal investment changes what happens to that investment. Resources move. Timelines shift. Commitments get reconsidered. In organisations where it isn't, the validated signal gets acknowledged, filed, and the investment continues on schedule.

This is where shared accountability matters more than process design. When commercial teams own revenue targets and product teams own delivery against roadmap commitments, customer relevance becomes a secondary concern for both. It lives inside the CX function, which produces insight, presents it, and has no formal power to act on it. The research is real. The accountability for acting on it belongs to no one.

"Customer focus as a function produces insight. Customer focus as a principle changes what gets built."

What changes this is commercial and product leads owning customer outcomes as part of the same accountability they own for revenue and delivery. Not a separate metric. The same one. When the question "are we still relevant to the customers we're building for?" is part of how product and commercial performance gets evaluated, the signals that used to get reinterpreted start getting acted on instead.

What the incentive structure is actually rewarding

Organisations eventually do more of what they measure. If commercial teams are measured on volume and product teams on delivery against roadmap commitments, customer outcomes are structurally secondary — regardless of what the values statement says.

This isn't a values problem. It's a design problem. And it's one of the more difficult ones to address because the metrics that drive inside-out behaviour are usually the same ones that tell a credible story to the board. Revenue, margin, and on-time delivery are real measures of real performance. The issue isn't that they're wrong. They're insufficient on their own.

The organisations that have made this work have added a layer of leading indicators that sit upstream of revenue: measures of customer relevance, segment share within target accounts, and how customer needs are shifting relative to the current portfolio. These are harder to quantify and less tidy to present. They're also the numbers that tell you whether the business is moving with the market or behind it.

We've seen companies where NPS had been rising for three consecutive years and the strategy team was simultaneously watching a new entrant take share in two of their most important segments. The satisfaction score was real. It just wasn't measuring the thing that was about to matter.

The shift starts with the planning cycle. If the first serious conversation about next year begins with last year's revenue mix and this year's capacity, the strategy will be inside-out regardless of how much customer research gets added later. If it starts with an honest account of where customer needs are heading and which parts of the current portfolio address that future, the strategy has a chance of staying relevant.

The diagnostic question

There is one question we've found useful with leadership teams trying to be honest with themselves about where they actually stand.

Take the last three significant product or commercial decisions your organisation made: investments committed, segments prioritised, markets entered or exited. For each one, ask: what was first in the room? A customer problem that demanded a response, or an internal asset, an existing capability, or a gap in last year's targets?

Most leadership teams, when they look closely, find that internal logic was first more often than the external signal. That's not a failure of character. It's the predictable result of planning processes that start from the inside.

The question is useful not because it's damning but because it's specific. It points to a change in sequence rather than a change in values. And it's a change that can be made deliberately, one planning cycle at a time, without restructuring the entire organisation before the work begins.

Customer focus is not a score, a function, or a values statement. It's a decision about where the conversation begins, and who owns what happens when the answer from outside contradicts what was already on the table.

Key takeaways

Customer focus is not about how much insight the organisation gathers. It's about where that insight enters the decision-making process. If it arrives after the shortlist has been formed, it validates rather than shapes.

The clearest test of whether customer focus is structural: when a validated customer signal contradicts an internal investment, does it change what happens to the investment?

Housing customer focus inside a single function produces insight without accountability. Real customer focus requires commercial and product leaders owning customer relevance as part of their core performance accountability, not as a secondary metric.

Incentive structures that measure volume and delivery consistently produce inside-out behaviour, regardless of stated intent. Leading indicators of customer relevance need to sit upstream of revenue in how performance gets evaluated.

The diagnostic: look at your last three significant decisions and ask what was first in the room: a customer problem or an internal gap. The pattern is more revealing than any satisfaction score.

FAQ

What is the difference between customer focus and customer satisfaction? Customer satisfaction measures how people feel about what you've already built. Customer focus shapes what you decide to build in the first place. An organisation can score well on satisfaction and still be losing relevance. If customer needs are shifting and the portfolio isn't moving with them, high satisfaction today doesn't protect the business tomorrow.

Why do product organisations with genuine commitment to customers still end up inside-out? Because the planning processes, roadmap-setting mechanisms, and incentive structures were built around internal logic, and those structures consistently win arguments at the moments that matter. The people are genuine in their commitment. The design of the organisation pulls in a different direction.

How do you change where customer insight enters the process? By restructuring the sequence of conversations. Customer and market understanding needs to be in the room before the options are formed. Not as validation for directions already chosen, but as the frame within which options get generated. Practically, this means portfolio and segment decisions begin with an honest account of where customer needs are heading, before the internal view of capability and investment has set the frame.

What does shared accountability for customer outcomes look like? It means commercial and product leaders owning whether the organisation is relevant to its target customers, with the same accountability they carry for revenue and delivery, not as a separate CX metric. When customer relevance is something only the insights team reports on, it becomes a service. When it's part of how product and commercial performance gets evaluated, it changes behaviour.

How does this connect to commercial architecture? Directly. Where to play, which segments to prioritise, and what value proposition to build are all decisions that produce different outcomes depending on whether they're grounded in genuine customer understanding or internal revenue history. Outside-in commercial architecture holds margin better, generates more defensible positioning, and avoids investing in products that address problems customers have already solved another way.

Recognise any of these patterns in your organisation?

If this resonates, there's a good chance we can help. Let's have a straight conversation about where you are.

Let's connect