Cognizant Blog

Synthetic persona simulations can be valuable tools for consumer research, but it pays to know when they add value and when their outputs can mislead.

As extracting insights from ever-growing volumes of consumer data becomes increasingly complex, organizations are investigating AI tools, such as synthetic persona simulations, to accelerate research and to test ideas and scenarios. AI-generated user cohorts, created from real consumer data, let teams rehearse business decisions, ideas, and concepts in a safe environment before implementation.

Beyond industry developments and initiatives, there is growing scientific investment across multiple institutions to define the field of synthetic personas and their use. For example, a 2024 research paper by Google DeepMind and a Stanford University-affiliated research team reported building 1,052 generative agents from two-hour qualitative interviews. The agents replicated participants' General Social Survey responses 85% as accurately as the participants replicated their own answers two weeks later (Park et al., 2024).

Synthetic personas cannot accurately estimate market outcomes or replace real-world consumer research, but they bring something different and important to innovation cycles. When well calibrated on rich qualitative data, these simulation models can generate convincing, falsifiable hypotheses at speed by analyzing large volumes of data, shortening product development cycles and creating a competitive market advantage.

However, the speed, flexibility, and creative latitude that synthetic simulations bring to research also create serious risks. Synthetic research simulations can produce convincing user stories while overlooking empirical consumer data and experience, especially when they lack explicit governance controls, and that combination can lead to unreliable research outcomes.

What Are the Risks and Governance Gaps of AI-Created Synthetic Persona Simulations for Executive Decision-Making?

Research shows that using synthetic personas can support experimentation and help companies close gaps in decision-making. To trust these outcomes, however, we must ask: under what constraints do these studies remain valid decision-support tools rather than persuasive fiction? Below are the three most common failure modes of synthetic consumer simulations.


Figure 1: A horizontal infographic shows three icons (left to right): an abstract human‑shaped outline representing distortion of psychology; a grid of filled and unfilled dots representing distortion of representation; and a dartboard with scattered, off‑target points representing distortion of confidence.
 

1) Distortion of Psychology

Highly believable synthetic emotions that are not connected to lived human experience.

Hollow empathy describes a simulation output that appears emotionally perceptive and can name human fears and motivations, while overlooking the situational reasons that actually drive the behavior. The resulting narrative can sound human-like yet remain psychologically shallow, because it lacks the contextual grounding for how and why the emotion arose.

2) Distortion of Representation

Failing to cover the full range of consumer segments.

Distortion of representation occurs when synthetic personas reflect only the most likely patterns. This happens when the system is built without deliberately varied model inputs spanning the target consumer base. As a result, these tests can under-represent edge cases, minority experiences, and uncomfortable findings, the very material that traditional research surfaces to avoid strategic blind spots.
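As an illustration of how a team might audit for this distortion, the sketch below compares segment shares in a synthetic cohort against the shares in the real target population and flags under-represented segments. The function name, data, and tolerance threshold are hypothetical assumptions, not part of any tool described here.

```python
def underrepresented_segments(synthetic_counts, population_shares, tolerance=0.5):
    """Flag segments whose share of the synthetic cohort falls below
    `tolerance` times their share of the real target population."""
    total = sum(synthetic_counts.values())
    flagged = []
    for segment, expected_share in population_shares.items():
        observed_share = synthetic_counts.get(segment, 0) / total
        if observed_share < tolerance * expected_share:
            flagged.append(segment)
    return flagged

# Hypothetical cohort of 100 synthetic personas vs. known population shares
cohort = {"urban_18_34": 70, "rural_55_plus": 2, "suburban_35_54": 28}
population = {"urban_18_34": 0.40, "rural_55_plus": 0.25, "suburban_35_54": 0.35}

print(underrepresented_segments(cohort, population))  # ['rural_55_plus']
```

A check like this does not prove the cohort is representative, but it makes one class of blind spot visible before anyone trusts the simulation's output.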

3) Distortion of Confidence

Highly confident outputs that lack the traceability or evidence to back up their claims.

One of the most dangerous failure modes of LLM outputs is the production of overconfident claims that lack supporting evidence. These models optimize for a convincing answer rather than for epistemic modesty or truth. As a result, teams end up with persuasive arguments that justify decisions they already intended to make, rather than new knowledge of consumers' perspectives, experiences, and feelings.
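One lightweight mitigation is to require evidence traceability: flag any high-confidence synthetic claim that cites no real research artifact. The sketch below is a minimal, hypothetical illustration; the class, field names, and confidence threshold are assumptions, not an established tool or API.

```python
from dataclasses import dataclass, field

@dataclass
class SyntheticClaim:
    """A single statement produced by a persona simulation."""
    text: str
    confidence: float                                  # model-reported, 0..1
    evidence_ids: list = field(default_factory=list)   # links to real artifacts

def flag_overconfident(claims, threshold=0.8):
    """Return claims that sound certain but cite no real evidence."""
    return [c for c in claims
            if c.confidence >= threshold and not c.evidence_ids]

claims = [
    SyntheticClaim("Users will abandon onboarding at step 3", 0.95),
    SyntheticClaim("Price sensitivity peaks in Q4", 0.90,
                   evidence_ids=["interview-12", "survey-2024"]),
]
flagged = flag_overconfident(claims)
print([c.text for c in flagged])  # only the unsupported high-confidence claim
```

Routing flagged claims to human researchers, rather than into decision decks, keeps persuasive-but-unsupported output from masquerading as evidence.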

When Should You Use AI-Generated Synthetic Simulations, and When Not?

It depends on whether the decision can be reversed and the project's stakes.

To understand the concept of synthetic persona simulation, consider a practical analogy: a flight simulator. Pilots rehearse rare, high-risk scenarios in regulated settings rather than while flying real passengers. They test scenarios that could occur in the real world inside a simulated environment, adapt their actions, and learn from their failures, so that real passengers can have safer flights. Synthetic simulations can serve a similar purpose: conducted well and with awareness of their limitations, they act as early warning systems and guide business decisions.

The best place to start experimenting is when the stakes are low and the decisions are reversible. Example topics include iterating on marketing messaging, consumer onboarding flows, feature positioning, early concept screening, and internal strategy options. These are areas where errors are inexpensive and course correction is relatively easy.

The second-best use case is when the stakes are high but decisions remain reversible, even under significant project pressure. These scenarios suit using simulation tools to create, explore, and experiment with a variety of ideas and feedback, enabling faster progress across more diverse options. That can help surface and test assumptions, provided the final material is treated as insight or counsel rather than as a decision criterion.

When the stakes are low but reversibility is also low, such as product or marketing changes that are hard to roll back, use AI simulators only for exploratory purposes. One helpful approach is to use simulators to generate hypotheses and identify where to focus user research, then commission targeted human research to confirm and validate the findings.

The least appropriate scenario for simulations is high-stakes, low-reversibility decisions. Do not allow synthetic outputs to influence decisions related to employment, eligibility, safety, healthcare, credit, or similar areas where errors could harm people. At most, use simulations to red-team assumptions, never as the final word or as decision-support material.

In short, as a project's stakes rise and the reversibility of outcomes declines, governance and real-user validation must take the lead.

Figure 2: A strategic matrix plotting decision stakes against reversibility. It defines four zones: “exploratory with caution” for low‑stakes/low‑reversibility tasks; “explore freely” for low‑stakes/high‑reversibility tasks; “test but validate with humans first” for high‑stakes/high‑reversibility tasks; and “do not rely on synthetic outputs” for high‑stakes/low‑reversibility tasks.
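The four zones of the matrix can be summarized as a simple lookup. The sketch below is illustrative; the function name and the "low"/"high" encoding of stakes and reversibility are assumptions, while the zone labels mirror Figure 2.

```python
def simulation_guidance(stakes: str, reversibility: str) -> str:
    """Map a decision's stakes and reversibility ("low" or "high")
    to the corresponding zone of the stakes-vs-reversibility matrix."""
    zones = {
        ("low", "high"): "explore freely",
        ("low", "low"): "exploratory with caution",
        ("high", "high"): "test but validate with humans first",
        ("high", "low"): "do not rely on synthetic outputs",
    }
    return zones[(stakes, reversibility)]

print(simulation_guidance("low", "high"))   # e.g., iterating marketing messaging
print(simulation_guidance("high", "low"))   # e.g., hiring, credit, healthcare
```

Encoding the matrix explicitly, even this crudely, forces a team to classify each decision before a simulation's output reaches the room.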

Conclusions

To sum up, one of the greatest risks of AI models, especially those affecting large communities without their awareness, lies in the absence of a controlled approach. Applied to synthetic personas, simulations can offer organizations and research teams opportunities to learn, experiment, and explore. They open paths to more innovative ideation by enabling the discreet testing of novel or high-risk concepts (Greenwald, 2024). Those who build governance early will define the next era of research-based decision-making and gain a market advantage through rapid innovation.

References

(Note: This reference list follows APA 7th edition style.)

Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P., Inkpen, K., Teevan, J., Kikin-Gil, R., & Horvitz, E. (2019). Guidelines for human-AI interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3290605.3300233

DiNapoli, J. (2024, December 10). Toothpaste maker Colgate testing new product ideas on 'digital twins'. Reuters. Retrieved February 12, 2026, from https://www.reuters.com/business/retail-consumer/toothpaste-maker-colgate-testing-new-product-ideas-digital-twins-2024-12-10/

Greenwald, M. (2024, August 16). When implemented correctly, synthetic personas reduce creative risk and maximize upside. Forbes. https://www.forbes.com/sites/michellegreenwald/2024/08/16/synthetic-personas-done-right-reduce-creative-risk-and-maximize-upside/

Kim, T., Hong, M. K., Chen, Y., Li, J. Q., Van, M. P., Hakimi, S., Kay, M., & Klenk, M. (2026). Personagram: Bridging personas and product design for creative ideation with multimodal LLMs. arXiv preprint arXiv:2602.06197. https://doi.org/10.48550/arXiv.2602.06197

Li, A., Chen, H., Namkoong, H., & Peng, T. (2025). The LLM-generated persona is a promise with a catch. arXiv preprint arXiv:2503.16527. https://doi.org/10.48550/arXiv.2503.16527

Mokander, J., & Floridi, L. (2024). Operationalising AI governance through ethics-based auditing: An industry case study. arXiv preprint arXiv:2407.06232. https://doi.org/10.48550/arXiv.2407.06232

Park, J. S., Zou, C. Q., Shaw, A., Hill, B. M., Cai, C., Morris, M. R., Willer, R., Liang, P., & Bernstein, M. S. (2024). Generative agent simulations of 1,000 people. arXiv preprint arXiv:2411.10109. https://doi.org/10.48550/arXiv.2411.10109

Sajith, A., & Kathala, K. C. (2024). Is training data quality or quantity more impactful on small language model performance? arXiv preprint arXiv:2411.15821. https://doi.org/10.48550/arXiv.2411.15821

Sarstedt, M., Adler, S. J., Rau, L., & Schmitt, B. (2024). Using large language models to generate silicon samples in consumer and marketing research: Challenges, opportunities, and guidelines. Psychology and Marketing, 41(4), 162–167. https://doi.org/10.1002/mar.21982

 


Melek Akan

Senior Associate Product Design, Research & Strategy, Cognizant
