Incandenza: The Science of Emergence and the Platform for Live AI Entertainment
From: Duncan Lomax & Naveen Bhati, Co-founders
Date: April 2026
Why We're Writing This
We don't write this memo to persuade anyone of an idea we just had. We write it because the idea has been brewing for a while, the science has caught up, and several converging market conditions have compressed the window for building it. The Football World Cup is under 12 months away. We intend to be ready.
This memo is the intellectual foundation of Incandenza. It is written to make our thinking legible - to investors, to early team members, and to ourselves. Every major product and strategic decision we make will trace back to the argument laid out here.
The argument, in brief: the biggest attentional properties in the world are live TV native because emergence is what drives attention. Peer-reviewed computer science has now proven that multi-agent AI systems produce genuine emergence. We are building the platform that applies that science to live entertainment at scale.
Executive Summary
Incandenza is building the platform infrastructure for live AI entertainment (starting with AI twins of sports creators reacting to live football, expanding to become the live entertainment layer of the internet).
This memo grounds our thesis in peer-reviewed science. The foundational academic work ("Generative Agents: Interactive Simulacra of Human Behavior" by Park, O'Brien, Cai, Morris, Liang, and Bernstein (Stanford / Google DeepMind, UIST 2023)) provides direct empirical evidence that the core mechanism we are commercialising is real, validated, and rated more believable by evaluators than a human baseline. That finding is not a marketing claim. It is a controlled experimental result, and it is the bedrock of everything we're building.
The memo proceeds in four parts: the science, the business thesis, the market, and the strategic implications.
Part I : The Science : What Park et al. Proved
The Paper That Changed Our Thinking
When we read Generative Agents: Interactive Simulacra of Human Behavior, published at the ACM Symposium on User Interface Software and Technology in October 2023, it crystallised something we had been circling for months. The authors (affiliated with Stanford University and Google DeepMind) built a sandbox world called "Smallville" populated by 25 AI agents, each given a one-paragraph identity description as a seed memory. What the agents did next was not scripted.
"A society full of generative agents is marked by emergent social dynamics where new relationships are formed, information diffuses, and coordination arises across agents." (Park et al., 2023)
This is the sentence we keep coming back to. It is not a speculative claim about AI's potential. It is a description of observed experimental behaviour. The emergence we are building Incandenza around is documented, reproducible, and peer-reviewed.
The Architecture
The paper describes a three-component architecture that enables persistent, believable character behaviour over time. We think of it as the technical blueprint for everything we're building:
1. Memory Stream
A long-term database of every agent experience, recorded in natural language. Every observation, conversation, and action is stored. A retrieval model surfaces the most relevant memories moment-to-moment by combining three scores: recency (exponential decay), importance (an LLM-rated 1-10 "poignancy" score), and relevance (cosine similarity to the current context).
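To make that scoring rule concrete, here is a minimal sketch in Python. The `Memory` dataclass and its field names are our illustrative assumptions; the per-hour decay factor of 0.995 and the equal weighting of the three components are the values the paper reports for its implementation (which also min-max normalises each component over the candidate set, a step we omit for brevity).

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Memory:
    text: str
    embedding: np.ndarray       # embedding vector of `text`
    importance: float           # LLM-rated "poignancy", 1-10
    last_accessed_hours: float  # sim-time when last retrieved

def retrieval_score(m: Memory, query_emb: np.ndarray, now_hours: float,
                    decay: float = 0.995) -> float:
    """Recency + importance + relevance, equally weighted."""
    recency = decay ** (now_hours - m.last_accessed_hours)   # exponential decay
    importance = (m.importance - 1.0) / 9.0                  # rescale 1-10 to 0-1
    relevance = float(m.embedding @ query_emb /
                      (np.linalg.norm(m.embedding) * np.linalg.norm(query_emb)))
    return recency + importance + relevance

def retrieve(memories: list[Memory], query_emb: np.ndarray,
             now_hours: float, k: int = 5) -> list[Memory]:
    """Surface the k most relevant memories for the current context."""
    ranked = sorted(memories, reverse=True,
                    key=lambda m: retrieval_score(m, query_emb, now_hours))
    return ranked[:k]
```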
2. Reflection
Periodically (triggered when the sum of importance scores for recent events exceeds a threshold, roughly 2-3 times per day in the simulation), agents synthesise their memories into higher-level insights. They ask themselves: "What do I know about the people and events around me?" These reflections are stored back into the memory stream, enabling the agents to form opinions, notice patterns, and develop a point of view that evolves over time.
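A sketch of the trigger and the synthesis step, reusing the `Memory` class above and assuming a generic `llm()` completion helper (a placeholder, not a specific API). The threshold of 150 is the value the paper reports for its implementation; the prompts are our paraphrase of its question-then-insight procedure.

```python
def should_reflect(recent: list[Memory], threshold: float = 150.0) -> bool:
    # Fires roughly 2-3 times per simulated day in the paper's runs.
    return sum(m.importance for m in recent) >= threshold

def reflect(recent: list[Memory], llm) -> list[str]:
    """Synthesise recent memories into higher-level insights
    (question generation, then answering, per the paper)."""
    record = "\n".join(m.text for m in recent)
    questions = llm(
        f"Given only this record:\n{record}\n"
        "What are the 3 most salient high-level questions "
        "we can answer about the subjects?"
    ).splitlines()
    return [  # the caller stores each insight back into the memory stream
        llm(f"Record:\n{record}\nAnswer briefly, as one insight: {q}")
        for q in questions if q.strip()
    ]
```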
3. Planning
Agents generate hierarchical daily plans (high-level → hourly → 5-15 minute actions), then dynamically revise them when new observations warrant. When Isabella saw her stove burning, she turned it off. When she encountered a friend, she invited them to a party. Nobody told her to do either.
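The same pattern in sketch form: decompose a day top-down, then let new observations interrupt it. `llm()` is again a placeholder and the prompts are illustrative; the recursive decomposition and the react-or-continue decision are the paper's mechanism.

```python
def plan_day(agent_summary: str, llm) -> list[str]:
    """Top-down decomposition: broad agenda, then fine-grained actions."""
    outline = llm(f"{agent_summary}\nSketch today's plan in 5-8 broad strokes.")
    return llm(f"Plan:\n{outline}\n"
               "Break this into a list of 5-15 minute actions.").splitlines()

def maybe_react(agent_summary: str, current_action: str,
                observation: str, llm) -> str | None:
    """Ask whether a new observation warrants revising the current plan."""
    decision = llm(f"{agent_summary}\nCurrently: {current_action}\n"
                   f"Observed: {observation}\nShould they react? yes or no.")
    if decision.strip().lower().startswith("yes"):
        return llm(f"{agent_summary}\nObserved: {observation}\n"
                   "What do they do instead, right now?")
    return None  # no reaction: keep executing the existing plan
```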
The Emergent Behaviours : The Ones That Convinced Us
The most significant finding in the paper is not the architecture. It is what the architecture produced (unprogrammed, unscripted social dynamics emerging from a single seed instruction):
- Valentine's Day Party: One agent was told she wanted to throw a party. Without further instruction, she invited friends and customers, they spread word to others, one agent asked another on a date to the party, and five agents arrived at the correct location at the correct time on the correct date (all from a single user-generated seed).
- Information diffusion: Sam Moore's election candidacy spread from 1 agent to 8 agents (4% to 32% of the population) through natural conversation within two simulated days. A party invitation spread from 1 to 13 agents (52% of the population).
- Relationship formation: Social network density in Smallville grew from ρ = 0.167 to ρ = 0.740 as agents formed new relationships organically.
- Relationship memory: When Sam met Latoya (who mentioned her photography project), he remembered it in their next interaction and asked how it was going. This was not scripted. It emerged from the memory retrieval system.
These are not merely impressive demos. They are the mechanism of live TV, reproduced in silico.
The Evaluation Result We Quote in Every Pitch
The paper conducted a controlled evaluation comparing the full architecture against ablated versions and a human baseline. 100 crowdworkers evaluated the believability of agent responses to interview questions.
| Condition | TrueSkill Score |
|---|---|
| Full Architecture (Memory + Reflection + Planning) | 29.89 |
| No Reflection | 26.88 |
| No Reflection + No Planning | 25.64 |
| Human Crowdworkers roleplaying the same agents | 22.95 |
| No Memory, Reflection, or Planning | 21.21 |
The full generative agent architecture was rated more believable than humans roleplaying the same characters. The difference was statistically significant: Kruskal-Wallis H(4)=150.29, p<0.001, with all pairwise Dunn comparisons significant at p<0.001.
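For readers who want to see the shape of that analysis, a minimal sketch on placeholder numbers (the study's real inputs were per-evaluator TrueSkill ratings; `scikit-posthocs` is our library choice for the Dunn post-hoc, not necessarily the authors'):

```python
from scipy.stats import kruskal
import scikit_posthocs as sp

# Placeholder per-condition believability ratings (hypothetical numbers);
# the study's real inputs were per-evaluator TrueSkill ratings.
conditions = {
    "full":          [30.1, 29.7, 29.9, 30.2, 29.8],
    "no_reflection": [27.0, 26.5, 26.9, 27.1, 26.8],
    "no_refl_plan":  [25.5, 25.8, 25.6, 25.7, 25.6],
    "human":         [23.0, 22.8, 23.1, 22.9, 23.0],
    "ablated_all":   [21.3, 21.1, 21.2, 21.3, 21.1],
}

groups = list(conditions.values())
h, p = kruskal(*groups)          # omnibus test across all five conditions
dunn = sp.posthoc_dunn(groups)   # pairwise post-hoc p-values (DataFrame)
print(f"H = {h:.2f}, p = {p:.3g}")
print(dunn)
```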
We quote this result because it inverts the most common objection to our product: "won't audiences find AI characters less believable than real humans?" The experimental answer is no (they find them more believable, when the architecture is right). That is our engineering target.
What the Paper Did Not Do : and Why That's Our Opportunity
The paper's authors are HCI and AI researchers, not entertainment entrepreneurs. They demonstrated the architecture in a static sandbox (The Sims-style). They did not:
- Connect agents to live, real-world event feeds
- Apply the architecture to real people's identities (creator twins)
- Build for real-time broadcast latency
- Design for entertainment consumption at scale
- Explore the commercial model
These are exactly the gaps we are building Incandenza to fill. The science is done. The productisation is ours.
Part II : The Business Thesis
The Core Insight We Keep Coming Back To
The biggest attentional properties in the world are live TV native. The Football World Cup, live sport, reality television, and breaking news dominate the attention economy not because of production quality, but because of emergence (famous people doing unpredictable things in real time that cannot be scripted or watched later without losing something essential).
Park et al. proved that generative agents produce genuine emergence. Multiple AI characters interacting with each other, responding to events as they happen, produce moments that nobody wrote and nobody predicted. Their behaviours are the output of architecture, not authorship.
Our thesis follows directly: apply the Park et al. architecture to real people's identities (consented creator twins), connect it to live event data feeds, and produce live entertainment that is genuinely unpredictable, consistently in character, and infinitely scalable. This is what Incandenza is.
The Four-Phase Roadmap We're Building Toward
We think about the business in four phases. Phase 1 is what we're funding now. Phases 2 to 4 are the reason the raise matters.
Phase 1 : React (2026, Football World Cup launch)
AI twins of sports content creators watch and react to live football in real time. Multiple twin characters interact with each other as they respond to match events (goals, red cards, half-time), producing emergent commentary that nobody scripted. The format is a live AI watchalong: familiar, accessible, sports-native. Revenue model: creator licensing fees, platform subscription, brand integration.
The Park et al. paper is directly applicable here. The memory stream gives each twin a persistent voice across a full tournament. Reflection cycles (triggered by the end of each half, each match, each round) allow twins to develop evolving opinions about teams, players, and narratives across the competition. Planning enables them to respond to live events in ways consistent with their established character. The result is an AI pundit who sounds like the creator, remembers the whole tournament, and reacts authentically to every moment.
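A compressed sketch of that loop, reusing the `Memory` and `retrieve` pieces from Part I. The event schema, the `llm()` and `embed()` helpers, and the importance values are illustrative placeholders, not our production API:

```python
def on_feed_event(event: dict, twin_persona: str, memories: list[Memory],
                  llm, embed, now_hours: float) -> str:
    """One structured match event in, one in-character reaction out."""
    # 1. Ground-truthed memory injection straight from the data feed.
    fact = f"{event['minute']}' {event['type']}: {event['description']}"
    memories.append(Memory(fact, embed(fact),
                           importance=event["importance"],
                           last_accessed_hours=now_hours))
    # 2. Retrieve the twin's relevant history: past takes, rivalries, jokes.
    context = retrieve(memories, embed(fact), now_hours, k=8)
    history = "\n".join(m.text for m in context)
    # 3. React in the creator's voice and remember the reaction itself.
    reaction = llm(f"{twin_persona}\nRelevant history:\n{history}\n"
                   f"Live event: {fact}\nReact in character, 1-2 sentences.")
    memories.append(Memory(reaction, embed(reaction), importance=3.0,
                           last_accessed_hours=now_hours))
    return reaction

# At half-time and full-time, the reflection pass from Part I runs over the
# match's memories, so opinions compound across the tournament.
```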
Phase 2 : Become (2026)
AI twins become the live event. The AI version of Baller League or Love Island: famous AI twins as the cast of an original AI-native format. The show is not a reaction to sport; it is the sport. Production cost approaches zero. The format runs simultaneously in every timezone. There is no off-season.
Phase 3 : Expand (2026–2027)
Apply the model to any IP. AI Premier League pundits who exist 24/7 and evolve across a season (with history, rivalries, and moments that fans follow like real personalities). AI fictional universes (AI Hogwarts evolving in real time from year to year). The paper's reflection architecture is particularly powerful here: characters who develop opinions, remember past events, and change across time are indistinguishable from their fictional or media counterparts in the minds of engaged audiences.
Phase 4 : Open (2027+)
The platform layer. Anyone creates formats, builds personas, and licenses existing ones. UGC for live TV (the Roblox of live entertainment). A creator economy built on AI character IP.
The Moat We're Deliberately Building
Technology in this space is commoditising. We know that. The moat we're building is not the architecture. It is what compounds in the time it takes the platform giants to move:
- Creator consent agreements signed at scale before any competitor. The twin library becomes the barrier.
- Format IP (the live AI watchalong format, proven and iterated before the Football World Cup 2026).
- Rights infrastructure (the commercial and legal architecture for live sports content in AI formats).
- Character memory and history (AI twins who have watched 50 matches together, developed running jokes, built opinions across a tournament, and earned audience loyalty). This is the Park et al. architecture as a moat: the memory stream compounds with every event.
Part III : The Market
Four Converging Markets We're Sitting at the Intersection Of
Live Entertainment ($535 billion globally in 2025, projected to reach $859 billion by 2034). Real-time streaming is the fastest-growing segment. 96 of the top 100 most-watched telecasts in the US in 2025 were live sports events, confirming live's structural dominance of the attention economy.
AI in Media & Entertainment ($25.98 billion in 2024, projected to reach $99.48 billion by 2030 at a CAGR of 24.2%). The highest-growth applications are real-time content generation and virtual production, both core to the Incandenza platform.
Interactive Streaming ($30.66 billion in 2024, projected to reach $284.18 billion by 2034 at a CAGR of 24.94%). Audiences are moving from passive consumption to participatory, real-time engagement with content.
The Creator Economy (projected to reach $480 billion by 2027 and $2 trillion by 2035 at a 23.4% CAGR). The limiting factor on creator economy growth has always been creator time. We remove that constraint entirely.
The Multi-Agent AI Infrastructure Tailwind
The multi-agent AI market is projected to grow from $7.84 billion in 2025 to $52.62 billion by 2030, at a 48.5% CAGR. Gartner predicts that by 2028, 33% of enterprise software applications will include agentic AI. IDC forecasts that by 2027, 80% of agentic AI use cases will require real-time, contextual data access.
We are a consumer entertainment application of the same architecture enterprise is investing billions to build. The infrastructure is being funded by others; the entertainment application of that infrastructure is the whitespace we own.
Market Signals That Validate the Direction
Several recent events confirm we are building with the current:
- NBCU's Peacock Bravoverse (summer 2026): NBCUniversal is launching an AI avatar of celebrity host Andy Cohen inside a live Bravo entertainment experience (one of the largest media conglomerates in the world betting on AI avatars in live entertainment).
- Playback (2025): The live sports reaction format raised $22M with the NBA and MLB as investors, validating the rights-accessible, creator-led live reaction model.
- BleacherBot (CHI 2025): Peer-reviewed ACM research investigated LLM-based conversational agents as sports co-viewing partners, finding that users formed genuine social presence and engagement with AI co-viewers during live matches.
- AI agents in sports broadcasting (2026): Networks deploying multi-agent AI architectures are projecting 30–40% production-cost savings and 15–25% higher social engagement.
Part IV : Strategic and Scientific Implications
Why the Park et al. Architecture Maps Directly Onto Our Product
When we sit down to spec out the Incandenza platform, the Park et al. paper is our engineering reference document. The mapping is precise:
| Paper Component | Incandenza Application |
|---|---|
| Memory Stream | AI twin remembers every match, every moment, every argument across a full tournament (building the long-term persona that audiences follow) |
| Reflection | Per-half, per-match, per-round synthesis (the twin develops evolving opinions about players, form, and narratives, just as a real pundit does across a season) |
| Planning | Reactive behaviour during live events (the twin responds to goals, red cards, VAR decisions in ways consistent with its established voice and opinions) |
| Information diffusion | In a multi-twin format, information and opinions spread between AI characters naturally (one twin's strong take influences others, creating organic debate without scripting) |
| Emergent social dynamics | The moments that go viral: unexpected agreement, heated disagreement, an AI twin saying something surprising but perfectly in character (all unscripted) |
| Single-seed emergence | One live event is the seed. The character AI produces the rest. |
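The information-diffusion row is the one that needs the least machinery. A sketch, with the same placeholder helpers as before: each twin's utterance is written into every other twin's memory stream, and that shared observation is all the paper's diffusion mechanism requires.

```python
def debate_round(streams: dict[str, list[Memory]], personas: dict[str, str],
                 topic: str, llm, embed, now_hours: float) -> None:
    """One round-robin exchange between twins; takes propagate because
    every utterance becomes a shared observation."""
    for name, persona in personas.items():
        context = retrieve(streams[name], embed(topic), now_hours, k=8)
        history = "\n".join(m.text for m in context)
        take = llm(f"{persona}\nHistory:\n{history}\n"
                   f"Topic: {topic}\nGive your take in one sentence.")
        utterance = f'{name} said: "{take}"'
        for stream in streams.values():  # every twin remembers who said what
            stream.append(Memory(utterance, embed(utterance), importance=4.0,
                                 last_accessed_hours=now_hours))
```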
The paper's finding that full-architecture agents were rated more believable than humans roleplaying the same characters has a direct commercial implication we think about constantly: audiences watching Incandenza's AI twins will find them more authentic than actors pretending to be the creators. The architecture's fidelity is the product.
The Reflection Cycle as Storytelling Infrastructure
One of the most commercially underappreciated findings in the paper (and one of the reasons we are so confident in the long-term product) is the reflection mechanism. Agents who reflect 2-3 times per day, synthesising memories into higher-level insights, develop genuine points of view that evolve. Applied to a football season:
- After matchday 1: the AI twin forms an opinion about a striker's form
- After matchday 5: having seen the striker score three more times, the reflection deepens into a thesis about his role in the team
- After matchday 15: following a missed penalty, the AI twin's nuanced, historically-grounded reaction draws on 15 matchdays of memory
No human creator can do this across every simultaneous competition in every timezone. The AI twin becomes a more consistent, more historically-grounded version of the creator's voice (one that audiences will engage with more deeply over time precisely because of its memory).
Addressing the Paper's Known Limitations : Head-On
We respect the paper enough to take its limitations seriously. Each maps to a known engineering challenge in our build, and each has a mitigation:
| Paper Limitation | Our Mitigation |
|---|---|
| Memory retrieval failures (agent "forgets" things) | Live sport provides structured event data (goals, cards, stats) as explicit memory injection points (reducing reliance on unguided retrieval) |
| Hallucination/embellishment of memories | Real-time verified sports data feeds (Opta, Stats Perform) constrain what the agent can "remember" (facts are ground-truthed) |
| Overly formal speech from LLM instruction tuning | Creator voice fine-tuning on consented training data ensures authentic register, not generic LLM formality |
| High inference cost | Cloud inference costs have fallen ~90% since GPT-3.5 (the paper's model); production-grade economics are now viable |
| Long-term planning coherence | Sports provide natural temporal structure (half, match, gameweek, season) that scaffolds coherent long-term planning |
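The first two mitigations reduce to one discipline: the twin answers match-fact questions from a verified store, never from the model's parametric memory. A minimal sketch, with `feed_facts` and `llm()` as placeholders (the production version sits on licensed Opta / Stats Perform feeds):

```python
def grounded_recall(question: str, feed_facts: dict[str, str], llm) -> str:
    """Answer match-fact questions only from the verified feed store:
    the twin cannot 'remember' a goal the feed did not record."""
    evidence = "\n".join(f"- {k}: {v}" for k, v in feed_facts.items())
    return llm(
        f"Verified match facts:\n{evidence}\n"
        f"Question: {question}\n"
        "Answer using ONLY the facts above; if they do not cover it, "
        "say you don't know."
    )
```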
The Ethical Framework We're Building In From Day One
The Park et al. paper itself identifies risks that any commercialisation must address:
"These agents should be tuned to mitigate the risk of users forming parasocial relationships, logged to mitigate risks stemming from deepfakes and tailored persuasion, and applied in ways that complement rather than replace human stakeholders in design processes." (Park et al., 2023)
We take this seriously (not because regulators require it, but because the creators who sign with us need to trust us with their identity, and audiences need to understand what they're watching). Our consent-first architecture is a direct response. The SAG-AFTRA digital replica framework and the advancing NO FAKES Act establish the legal infrastructure; we build the commercial and technical consent layer on top of it, turning an industry-wide compliance challenge into a proprietary moat.
We are not building deepfakes. We are building consented, compensated, creator-owned digital twins (and the infrastructure that makes that commercially and legally sound at scale).
The Case We're Making
We have spent our careers at Meta and Improbable building the infrastructure for live virtual worlds, AI-driven characters, and systems that handle 10,000+ concurrent users. We have watched AI research and product timelines converge. And we have now seen a peer-reviewed paper prove, in a controlled experiment, that multi-agent generative AI systems produce genuinely emergent, believable behaviour (rated more authentic than humans performing the same role).
The five technical and market blockers that made this impossible before 2025 have each resolved in the last 18 months. The window is defined by a once-every-four-years event arriving in under 12 months.
This is the company we have been building toward. We're building it now.
Duncan Lomax & Naveen Bhati
Co-founders, Incandenza
incandenza.studio
References
- Park, J. S., O'Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023). Generative Agents: Interactive Simulacra of Human Behavior. UIST 2023.
- Computational Agents Exhibit Believable Humanlike Behavior
- Global Live Entertainment Market Size, Share 2025 - 2034
- AI In Media & Entertainment Market Size, Share Report, 2030
- Interactive Streaming Market Size to Attain USD 284.18 Bn by 2034
- The creator economy could approach half-a-trillion dollars by 2027
- Why Multi-Agent Systems Need Real-Time Context in 2026 - Solace
- Scaling AI Agents: Best Practices for Multi-Bot Deployment
- Iconic raises $13M seed to build AI-native, voice-driven games
- Will AI Replace UGC
- BleacherBot: AI Agent as a Sports Co-Viewing Partner
- AI Agents in Sports Broadcasting: Powerful Upside