The Story Behind Google DeepMind’s First AI Film

One night during the pandemic, animator Connie He found herself lying awake, listening to the strange symphony of apartment life: footsteps in stilettos at 5 a.m. and other mysterious thuds and echoes of neighbors living their lives just above her ceiling.

Out of that sleepless frustration came the seed of what would eventually become “Dear Upstairs Neighbors,” a 6-minute animated film based on her experience. It tells the story of an apartment dweller named Ada who tosses and turns in bed as noises from her neighbors torture her awake.

“I’m waking up from all of these noises,” He recalled, in an interview with The AI Innovator. “I had a lot of anger and I had to let it out in a healthy way. So I put it down on storyboards.”

While at Pixar, she met her film’s producer, Marcia Mayer. At the time, generative AI was hitting the market, with Stable Diffusion, DALL-E and other tools generating drawings whose uncanny abilities “freaked out” creators, He said. But the two decided to experiment with generative AI in animation, since Pixar is known for being at the forefront of art and technology innovation.

The resulting short film, directed by He, may mark a turning point in how movies themselves are made. It blends traditional animation craft with cutting-edge generative AI developed at Google DeepMind, where she now works as a film director and technical artist. Mayer also moved to Google DeepMind.

The film looks less like conventional computer animation and more like a “living painting,” a swirling, expressionistic world where brushstrokes seem to move with the characters. Ada’s outsized reactions, depicted in psychedelic-like digital illustrations, both exemplify and exaggerate her grievances against the unidentified neighbors. The film debuted in January at the Sundance Film Festival.

Google DeepMind said the film’s expressionistic style was “extremely difficult” to craft in traditional animation. Even AI needed help. “We expected that AI could help fill the gap, but soon found that these styles were so unique, and our design choices so specific, that our researchers would have to develop new capabilities to provide the customization and control that we needed to bring the film to life,” according to a Google blog post.

Researchers built tools that let artists teach Google’s video model Veo and image model Imagen new visual concepts from just a few example images, the company said.

The goal of an AI company diving into filmmaking? “We’re excited to showcase how creatives can use AI to supplement their creativity, not replace it,” Google DeepMind said in a LinkedIn post.

No prompts allowed

He said the film’s style wasn’t generated from text prompts. It began the old-fashioned way – with artists drawing and painting. “A lot of people use text prompts to design a character, to design concept art,” He said. “We’re like, ‘no, we don’t want to do that at all. We want artists to create the artwork, to design the characters, for animators to animate the character and show their life because these are things that you can’t just use words to create.'”

The production pipeline still resembled a traditional animated film. Storyboards were drawn by hand. Characters were designed and animated by artists. But once those elements were created, the team used Veo and Imagen to apply a custom visual style across thousands of frames.

Instead of rendering scenes through conventional CGI lighting and shading systems, the AI helped transform the animation into something that looks hand-painted. The system learned from artwork produced by the team and reproduced those textures and colors consistently throughout the film.

In effect, the technology became a kind of visual translator. The model even picked up subtle details the artists themselves hadn’t consciously encoded – such as the two-point perspective embedded in their concept art. The project required a hybrid team of about 45 people, including animators, engineers and researchers.

That collaboration echoes a long tradition in animation. Mayer noted that short films have historically served as experimental laboratories for new technologies. Pixar, for instance, used its early shorts to pioneer techniques such as realistic cloth simulation and rain effects. “Short films in animation have been how all these new technologies and looks have been developed because it’s just safer to take those risks in a short film,” Mayer said.

“Dear Upstairs Neighbors” follows that tradition, but with a twist: the technology being explored is not just a rendering technique or physics simulation. It is generative AI – a tool that has triggered anxiety throughout the creative community.

That tension is never far from the film’s reception. He said professional animators have largely reacted with curiosity, recognizing the care taken to preserve artistic control. But younger artists – especially students – often express fear that AI could make their skills obsolete.

“I’ve seen a lot of teenagers who love art themselves but they also feel overwhelmed by AI and they just give me a lot of hater comments,” He said. “But then, I relate to them. I feel their pain because if I’m in middle school or high school … and one day there’s this system that draws better than me.”

He’s advice to those artists is simple: Be true to yourself and keep creating. “Every brush stroke, every word you put down is still you,” she said. “So let’s not be afraid of AI. … Understand what it can do, what it cannot do – and then ask what is special about us” as humans.

Learning from AI

During production, the relationship between artist and machine often resembled an improvisational dialogue. Sometimes the AI produced unexpected imagery that the filmmakers embraced. Other times the team had to rein it in to preserve the narrative.

“It happens both ways all the time,” He said. “Sometimes it can give you something surprising you didn’t expect … ‘This is good, let me go with it. If it’s not good, I can also change it to something else.'”

The entire process took roughly three years, including the long and iterative storyboard development typical of animated storytelling.

For Mayer, the experience reinforced a belief that generative AI will ultimately function as just another creative tool. “I don’t think authorship changes at all in the age of generative AI,” she said. “The act of creation is in itself maybe the most human act of what we do.”

Whether the broader creative community will embrace that view remains an open question. Generative AI is already transforming fields from illustration to music composition. But many artists fear that systems trained on massive datasets of existing art could commoditize creative work.

He acknowledges the uncertainty. She sees the arrival of AI not as the end of creativity, but as a provocation – a moment that forces society to ask deeper questions about what creativity really is.

“The day AI came out, it forced us to think, ‘What are we?'” He said. “I hope we can keep exploring the humanity part.”
