The False Safety of Bigger Samples

  • Writer: Paul Peterson
  • 1 day ago
  • 6 min read

Corporate decision-makers have a familiar reflex when risk starts to feel uncomfortable.


Get more data.


Run a bigger survey. Broaden the sample. Add more respondents. Make the findings more projectible. Cast the net wider so no one can say the team missed something important.


Sometimes that instinct is right.


If you are sizing a market, measuring awareness, estimating consideration, forecasting adoption, or comparing segments, quantitative research can do its job well. It helps teams understand prevalence, scale, and difference.


But product and innovation decisions often require a different kind of understanding.


They require teams to figure out what is breaking down, what customers are trying to accomplish, where the current experience is failing, which tradeoffs matter, and what improvements would make the product more useful in real life.


Those are not always “how many” questions.


Often, they are “what is really going on here?” questions.


And when that is the question, bigger samples can create false safety.


They make the team feel more protected without necessarily making the decision stronger.


When Data Becomes Protection


I see this often in product, innovation, and insights work.


A team has a hard decision in front of them. The stakes are real. The organization wants confidence. Someone asks for a survey.


Not always because the survey can answer the real question.


Because a big sample and a neat chart feel like protection.


A chart can be pointed to. A percentage can be defended. A statistically significant difference can settle a meeting, or at least appear to settle it.


That comfort has value. But comfort and clarity are not the same thing.


A large sample can smooth over the very problem the team needs to understand. When you blend together people who barely care, people with shallow exposure, and people already wrestling with the problem, you often get an averaged answer.


And averaged answers can be dangerous when the goal is innovation.


They create the appearance of consensus. They reduce the sharpness of the issue. They make the middle look more reliable than it really is.


The average customer can tell you what is broadly familiar, broadly acceptable, or broadly understood.


But the average customer is often the wrong starting point for understanding where a category is moving, what a more useful solution would need to do, or which product improvement would meaningfully change behavior.


For those questions, the middle is usually a poor place to start.


The Wrong First Question


One of the most common objections to small-sample qualitative work is predictable:

“But how do we know this is representative?”


It's a fair question.


But it's often the wrong first question.


Representativeness matters when the team needs to size something. It matters when you need to know how common a belief or behavior is across a defined population.


But when the team is trying to understand an emerging need, a broken experience, a workaround, a source of hesitation, or a possible innovation opportunity, the better first question is different:


Are we listening to the right customers?


A broad sample of lightly engaged customers may provide projectible data, but it may also provide shallow input. Many respondents will have limited category involvement.


Some will have weak opinions. Others will answer based on what sounds reasonable in the moment rather than what they have learned through use.


That does not make their views worthless. It makes their views less useful for certain decisions.


If the objective is to understand what deserves attention, the most useful input often comes from customers who have earned their opinions through experience.


They know the category. They understand the tradeoffs. They have tried the workarounds. They feel the cost of the problem now. They can explain what is not working without simply venting.


They are not asking for novelty for its own sake.


They are asking for usefulness.


Those are the customers I call Catalytic Customers.


Why Catalytic Customers Are Different


Catalytic Customers are experienced participants in a category who are highly engaged, able to articulate what they need, and constructively critical about what does and does not work.


They are not necessarily experts.


They are not influencers.


They are not simply early adopters.


And they are not always a company’s current customers.


A Catalytic Customer may be a customer of the category rather than a customer of the brand. In some cases, the most useful perspective comes from someone who has chosen a competitor, patched together an alternative, or rejected the category’s existing options because none of them solve the problem well enough.


These customers are valuable because they are close enough to the category to have meaningful experience, but not so specialized that their views become detached from practical use.


They see what average users may not yet notice. They can describe what current products force them to do. They often reveal where the category is becoming more demanding.


That does not make them statistically representative of the market as it exists today.

It makes them representative of where the category may be going.

For product and innovation teams, that can be far more useful.


Anecdotes Are Not the Problem


Another predictable objection is that small-sample qualitative work is “anecdotal.”


That objection has merit when qualitative work is handled poorly.


One customer quote should not drive a roadmap. A colorful comment should not become a strategy. A single frustrated user should not be treated as the voice of the market.


But the problem is not qualitative evidence.


The problem is lazy interpretation.


Anecdotes are dangerous when they are treated as proof. They are useful when they are treated as evidence.


A pattern across several deeply engaged customers can expose something a dashboard may never show clearly:


A workaround customers have normalized.


A moment of confusion in the buying or usage journey.


A tradeoff customers are already making.


A gap between what the product promises and what it helps people accomplish.


A job customers are trying to get done in spite of the product rather than because of it.


The discipline is not collecting quotes. It is interpreting them responsibly.


Good qualitative work does not stop at “customers said.”


It has to answer harder questions:


What pattern are we seeing?


What do we believe it means?


Which decision should it inform?


What evidence would change our mind?


That is where small-sample qualitative work becomes more than a set of interesting comments.


It becomes a structured input into judgment.


The Right Sequence


This is not an argument against quantitative research.


It is an argument against using quantitative research as a substitute for judgment.


Quant is useful when the team knows what needs to be measured. It is less useful when the team is still trying to understand what deserves to be measured.


The better sequence is often:


First, listen deeply to the right customers.


Then, develop a sharper decision hypothesis.


Then, decide what needs validation, sizing, or stress-testing.


Then, use quantitative research where it can do its real job.


In that sequence, qualitative work does not compete with quantitative work. It makes quantitative work better.


It helps the team avoid wasting a large survey on shallow assumptions. It clarifies language. It identifies the moments that matter. It surfaces the tradeoffs worth testing. It gives the team a more useful understanding of what risk needs to be reduced.


Catalytic Customers help teams move from vague uncertainty to sharper uncertainty.


And sharper uncertainty is much easier to work with.


The Real Risk


The real risk in product and innovation work is not that the team listens to a small number of the right customers.


The bigger risk is that the team listens broadly, averages everything, and mistakes the resulting comfort for insight.


A broad sample can tell you what many people think.


That can be useful.


But it may not tell you which customers have the most useful perspective, which problems are worth solving, which tradeoffs matter, or where the category is starting to move.


For that, you need a different starting point.


You need customers who are engaged enough to notice.


Experienced enough to compare.


Constructive enough to help.


Focused enough on usefulness to push the product toward something better.


Those customers may not give you consensus.


They may give you something more valuable: a sharper understanding of what deserves to change.


That is not a replacement for rigor.


It is a different kind of rigor.


And for many product and innovation decisions, it is the kind teams need before they reach for the bigger sample.


Copyright 2026 CoinJar Insights LLC
