Tilly Norwood smiles, blinks, and speaks with the confidence of a young actor on a press tour. She looks natural on screen, responds in real time and even gives interviews. Yet she doesn’t exist. Tilly is an AI-generated performer created by Particle6 Productions. Her creators say talent agencies have already approached them about representation.
Synthetic performers are not new. When Tomb Raider was released in 1996, Lara Croft was a blocky 3D model with limited movement. Yet she quickly fronted global advertising campaigns and became one of the most recognisable characters in gaming. Her popularity led to two Hollywood films, in which she was played by a real actor, Angelina Jolie: a reminder that even the earliest digital icons still relied on human craft to bring them fully to life.

A decade later, Japan’s holographic singer Hatsune Miku took the concept further. Her computer-generated voice and projected image filled concert halls, proving that audiences would pay to see a performer who exists only as data.
By the late 2010s, virtual influencers such as Lil Miquela blurred the line between fiction and celebrity. She modelled for major fashion brands and attracted millions of followers online, functioning as a media personality in her own right.
Today, tools such as Unreal Engine’s MetaHuman Creator allow almost anyone to design, animate, and direct photorealistic digital humans in real time. Tilly Norwood is the logical successor to this lineage — a performer built entirely from code, capable of appearing anywhere and never ageing or complaining about long shoots.
The technology behind her is accelerating fast. OpenAI’s Sora 2 can generate short, photorealistic videos from simple text prompts. The results are impressive but unsettling. We’ve already seen fake celebrity interviews, AI-generated news footage, and uncanny echoes of copyrighted films. These tools draw on vast libraries of online video to learn their craft. That raises questions about ownership, consent, and the risk of machines reproducing, or replacing, real human performances.
It’s tempting to see this as an existential threat. After all, entry-level roles — rotoscope artists, crowd extras, runners — may be the first to disappear. Yet film has survived every previous disruption by adapting its craft.

When sound arrived in the late 1920s, many predicted the death of silent cinema. Some actors couldn’t make the transition, but others, like Chaplin and Garbo, reinvented themselves. Hitchcock, midway through shooting Blackmail (1929) as a silent film, turned the new sound technology into a storytelling device, using a single repeated word, “knife”, to capture psychological tension in the film’s famous breakfast scene.
When colour became mainstream, cinematographers such as Jack Cardiff transformed it into poetry. When digital replaced film stock, directors like David Fincher and the Coen brothers used it to explore light and precision in ways that were previously impossible.
Computer-generated imagery was met with the same scepticism. Yet without it, we would never have Toy Story, Jurassic Park, or the worlds built by Peter Jackson and James Cameron. Each leap was driven by artists, not by machines.
AI, I’m sure, will follow the same pattern. The tools are extraordinary, but they still need human judgement. An algorithm can learn the framing of Kubrick or the pacing of Nolan, but it can’t decide why those choices matter. It can reproduce the shape of a story, but not the soul of one.
The future of film will depend on how well we integrate these new systems into practice. Already, AI is being used to clean up dialogue, restore old footage, and generate quick pre-visualisations for complex scenes. It can help small studios experiment with ideas before committing to expensive shoots. It can give directors new ways to test movement, lighting, and mood.
Used responsibly, AI could make the industry more accessible and sustainable. Used carelessly, it could flood the world with cheap imitation.
Ownership will remain central. Digital actors like Tilly Norwood belong to their creators, not to the software companies that helped build them, at least for now. The unresolved question is whether AI developers will seek to claim a share of future creative profits. Meanwhile, the far bigger concern is the use of copyrighted material in training datasets. Filmmakers’ work has fuelled these systems without consent or compensation, and that needs to change.
Yet we should be cautious about declaring the death of the industry. Filmmaking is an ecosystem of skill: directing, cinematography, editing, sound, costume, design. These roles may evolve, but they will not vanish. What will matter most is the ability to work with, not against, the new tools.
AI will be most powerful in the hands of the skilled. The best directors will use it to explore visual ideas faster. Great actors will use digital doubles to extend their range, as the posthumous performances of Carrie Fisher and Paul Walker have already demonstrated. Editors will use AI to analyse footage but still make the human decisions about rhythm and meaning that no machine can match.
The story of film has always been one of adaptation. Méliès used stop tricks to make magic. Kubrick used NASA lenses to film candlelight. Pixar turned code into emotion. Each generation has redefined the craft with the tools available. AI will be no different.
The danger is not that machines will make films. It’s that we forget why we make them. Filmmaking has never been about generating images. It’s about understanding people — what they feel, fear, and hope for.
Play with Learning is a creative media service led by Carlton Reeve.