
Digital twins in market research: what brands need to know

James Webb (Co-founder - Talent Hatch)

Digital twins are moving into the mainstream, with more businesses running pilot studies and commissioning digital twin approaches.

From the conversations I've had, and looking at the millions of dollars being invested across the synthetic data space, it's pretty clear that digital twins may become an important part of the modern insight toolkit.

In this article we look at:

  1. What a digital twin is
  2. Why digital twins are moving from experimentation to wider adoption
  3. Key companies and recent investments
  4. The best digital twin use cases

What a digital twin actually is

A digital twin is a model of a real person, audience or segment. It's built from real inputs such as primary research (quant or qual), behavioural data and other category-relevant information, and then used to simulate likely responses to new questions or scenarios.

Digital twins are often discussed alongside synthetic respondents, but the two are not quite the same. Synthetic respondents are usually positioned as simulated respondents used to answer survey questions. Digital twins are often presented more broadly, as ongoing models of individuals or communities that can be used to test scenarios, extend earlier research and explore likely reactions over time.

Why research buyers are paying attention

Most client-side insight and analytics teams are under pressure to deliver faster answers, support more stakeholders and do more with limited time and budget. In many organisations, not every commercial decision can wait for a new survey, a fresh community or a full piece of bespoke research that takes several months. As a result, many decisions are still made using internal opinions, past learnings or gut instinct.

Digital twins are attractive because they offer something more structured than instinct, while still being faster and cheaper than commissioning new research each time.

I don’t think digital twins will replace all primary research, but they will help insight teams bring a consumer perspective into more decisions than before. Synthetic approaches mean you can test more scenarios than conventional research (and budgets) would usually allow, and they are better than running no research at all or simply asking a generic LLM.

Where digital twins may add value

Digital twins appear strongest in directional work rather than absolute measurement. They may be useful when the research objective is to explore, pressure-test or refine decisions, but less useful when the requirement is statistically robust evidence with tight margins of error.

The strongest use cases appear to be:

  • Early-stage concept exploration
  • Message and proposition refinement
  • Scenario testing
  • Innovation support
  • Simulating likely reactions before commissioning larger studies

For many insight teams the challenge is not a lack of data, quite the opposite! It’s that they cannot realistically run fresh research for every question the business asks. Digital twins may help fill part of that gap by extending the value of existing research and making earlier learning more usable. Used well they can make insight teams more responsive and more embedded in decision-making.

What research buyers should be cautious about

A digital twin's usefulness depends heavily on what it is built from. If the underlying data is weak, out of date, too generic or poorly matched to the category, geography or use case, the outputs are likely to be weak as well: "bad data in, bad data out". The quality of the model, the way information is structured, the refresh cadence of the data and the validation process all matter massively.

Research buyers should also be cautious about any suggestion that digital twins can simply replace real-world research. They may support directional learning, but they are not the same thing as validated respondent evidence. Higher-risk decisions still need stronger proof (a sentiment that seems to be shared by many of the insight leaders I've met with recently). Of course, there are trade-offs involved: time, budget, statistical accuracy and so on.

For research buyers, the practical questions are straightforward:

  • What is the model built from?
  • How recent is the underlying data?
  • How often is the data updated?
  • How category-specific is it?
  • How is it validated?
  • What should this output be used for, and what should it not be used for?

Key companies and recent investments

Digital twins and synthetic data products are clearly here to stay and look set to grow in the years to come.

A growing number of research companies are building products around synthetic audiences, synthetic respondents and digital twins, and the level of investment suggests this is becoming a serious capability area rather than a passing trend.

In the UK, Electric Twin is one of the best-known pure-play names in this space. They describe their offer as a synthetic audiences platform and recently announced a further $10m in funding.

Livepanel launched N-Infinite in 2024 and moved it into a SaaS model later that year. CulturePulse raised a $1m seed round in 2023 followed by a €1.45m Series A in early 2025. BluePill raised a $6m seed round in November 2025. Simile announced a $100m Series A in February 2026, one of the largest funding rounds in this space.

Verve has recently developed Verve Intelligent Personas & Simulations. Savanta has also entered the synthetic data space with the launch of Virtual Personas by Savanta, an AI-powered research platform that acts as a digital twin for consumer insights.

There are also signs that this space is moving beyond start-up experimentation into strategic acquisition for bigger agencies. Yabble, which positioned its offer around virtual audiences, raised NZ$3m in 2021 and was then acquired by YouGov in August 2024.

Companies such as MOSTLY AI and Gretel (now acquired by NVIDIA) show that serious money has already moved into the broader synthetic data layer too. Even if not every company sits directly in synthetic respondents, the direction of travel is pretty clear.

I have a complete list of digital twin and synthetic data companies in this space, if you'd find that useful feel free to contact me at james@talenthatch.co.uk 

What this means for research buyers

The real advantage doesn't come from buying access to a new tool but from knowing how to use it intelligently. Research buyers will need to get sharper at judging methodological fit. They will need to know when a synthetic respondent or digital twin is good enough for directional input, and when the decision risk is high enough that they should go back to real people. They will need to understand grounding, validation and the likely limits of the model before treating any output as decision-ready.

Summary

Digital twins are worth taking seriously, not because they make traditional research redundant, but because they may give insight teams a new way to bring consumer input into decisions between formal studies.

The combination of better tools, stronger product development and growing investment means research buyers should understand the space now. What matters is not whether a platform can simulate an answer. What matters is whether you know when that answer is useful, when it is risky, and when it still needs validating with real people.

Are you excited about the possibilities synthetic data offers research & analytics? Drop a comment below to share your view. 👇

Hiring in the synthetic data space, or thinking about your next move? Feel free to get in touch with me at james@talenthatch.co.uk 

__

👋 I’m James Webb, the founder of Talent Hatch.
