Member News March 13, 2026

Third Annual AI Summit Puts Creators Front and Center

Use of generative artificial intelligence by entertainment industry creatives was the focus at the event, which offered two tracks of programming for the first time.

Word of the Day at the Television Academy’s third annual AI Summit: skeuomorphism.

As defined by the summit’s keynote speaker, Doug Shapiro — a media strategist-advisor and author of The Mediator on Substack — skeuomorphism is the concept that a new medium at first typically imitates the medium that preceded it. An online magazine was initially simply a static version of its published predecessor, for instance, rather than today’s scrollable screen with hypertext links and embedded video.

The entertainment industry, Shapiro added, has been talking about generative artificial intelligence in a skeuomorphic way. But like any new medium, GenAI will enable new applications.

"We don't know what the ‘neumorphic’ applications are going to be," he said. "What happens is that the creatives figure out the unique attributes of the medium and figure out how to do new stuff with it."

Those attributes include being inexpensive, fast and unconstrained by physics. There’s also the capability, though not unique to GenAI, of turning one format into multiple modalities: one video can become a film, television show, podcast and slide show.

While much is still unknown, such as the future state of the workflow or what labor replacements may occur, “I would argue that anything with an underlying humanity will not be vulnerable, or appropriate for [GenAI],” Shapiro remarked. “I don't think anyone wants to watch AI's Got Talent.”

The use of generative artificial intelligence by entertainment industry creatives was the focus of the AI Summit, held March 7, 2026, at the Academy’s Saban Media Center in the NoHo Arts District. The event’s goal was to inform and inspire, a mission it firmly accomplished.

For the first time, the summit offered two tracks of programming. The mainstage Wolf Theatre on the Saban first floor presented four panels on topics of particular interest from morning to early afternoon.

Upstairs, the Governors Room was divided into two sides for workshops. Beginning late morning and running for several hours, one side presented the continuous interactive DeConstructed GenJam by Machine Cinema, an immersive experience in which participants created short videos using AI to craft characters, build worlds and do postproduction.

The other side hosted small gatherings covering AI in the production pipeline, assistive AI and AI in music; it also included FAQ sessions for the documentary and reality programming, performers, and professional representatives peer groups. There was also time for lunch and networking.

The event was planned and presented by the Academy’s Innovation Advisory committee, co-chaired by emerging media programming peer group governor Eric Shamlin and stunts peer group governor Eddie Perez.

Eric Shamlin, founder-CEO of the AI-forward FireBringer Media Group

Photo Credit: Television Academy

Shamlin, founder-CEO of the AI-forward FireBringer Media Group, welcomed Summit attendees and noted that last year, “much of the conversation around AI was dominated by fear and protest, and frankly, that reaction made sense. But over the past year, the tone of the conversation has begun to change. Increasingly, the conversation is moving towards something else: cautious curiosity. More creators are experimenting. More people are trying to understand how these tools are actually behaving. And more of us are realizing that playing an active role in shaping how these technologies influence our careers may be a far stronger position than simply trying to avoid them. Because whether we like it or not, these tools are no longer theoretical. They are already here, already improving and already entering workflows across nearly every sector of our economy.”

Accordingly, the Summit focused on dynamic practical matters, eschewing platforms, product launches and marketing. “Everyone speaking here today is a maker, an artist, a filmmaker, a technologist — people who are actually experimenting with these tools in real creative workflows,” Shamlin said.

Shapiro’s keynote followed Shamlin’s remarks; he noted that “the last 15 to 20 years of media were defined by the disruption of content distribution. I think the next decade is going to be defined by the disruption of content creation.” That creation will occur within the context of lower costs and a new viewer definition of quality, including not only the long-held standard represented by HBO shows — high production values, high budgets, brand-name showrunners, recognizable talent — but also, “Is it authentic? Is it relatable, digestible? Is it relevant to my sub-community?”

Now more than ever, content will include what Shapiro called “complements,” products and services spun off from a property’s IP that can be monetized, such as subscriptions, events and toys.

“The business is going to get more competitive,” he concluded. “We're going to have more content competing for a finite amount of demand. It will get more complex. There are going to be opportunities for the people and the enterprises that position themselves correctly. And how do you do that? You educate yourself, as you are today; you experiment, and you lean into what's scarce.”

The first panel, “Updates in AI and the Law: Litigation, Legislation and Deal Making,” covered the ever-changing legal picture involving AI in media and entertainment. Moderator Holly Leff-Pressman, chief client engagement officer, Screen Engine/ASI, described several significant developments in the current legal landscape.

Among them: an explosion of litigation, with more than 60 AI-related active lawsuits, involving training data, outputs, video, music and the cloud; licensing as the goal; White House threats to creator protections and also to state laws governing AI; and SAG-AFTRA negotiations, with AI, as always, a top concern.

Panelist Timothy Ursprung, policy advisor at the Academy’s law firm, Venable LLP, explained the White House threat: a December 11 executive order that establishes an AI Litigation Task Force to challenge state laws regulating AI use. Lawsuits will likely be filed, and Congressional Democrats have formed their own commission, which they hope will lead to bipartisan cooperation. Supporting documents to the executive order, at least, have noted the need for copyright protection.

Jonathan Handel, an entertainment and technology attorney at Feig/Finkel, discussed negotiations with entertainment unions; SAG-AFTRA has started the process, with the Writers Guild and Directors Guild to come.

The SAG-AFTRA commercials contract prohibits the use of synthetic performers to save money. The verticals/microdrama agreement, which SAG-AFTRA wrote itself with no negotiations, also prohibits synthetic performers without bargaining with, and permission from, the union, as well as the use of audio or video content for AI training. Handel doesn’t believe there will be any strikes, though if there is one, it would likely be by the WGA.

Peter Csathy, CEO of Creative Media, talked about licensing models for training on IP, among them a newer usage-based model in which aspects of IP training data can be tracked and attributed as significant contributors to AI-generated content, enabling continuing royalties, revenue sharing and a one-time licensing payment. People with a fan following can license and control their personas. And industry companies and individuals are forming strategic partnerships with AI companies. A recent notable example: Disney has teamed with OpenAI to license IP for fan use — though not for training — generating a new revenue stream.

Summed up Leff-Pressman, “The companies that thrive will really be the ones that aren't waiting for the rules but are actually helping create them.”

Christina Lee Storm, governor of the emerging media programming peer group and cochair of the Advocacy committee, moderates the "Professional AI: Key Considerations, Creators Rights and Best Practices Television Professionals Can’t Ignore" panel

Photo Credit: Television Academy

The next panel, “Professional AI: Key Considerations, Creators Rights and Best Practices Television Professionals Can’t Ignore,” brought together industry members who, as moderator Christina Lee Storm noted, “helped pioneer tools and guidance with GenAI for industry professionals. As generative AI tools enter film and television workflows, [we] are being asked to make decisions that affect creative integrity, performer protections, disclosure and long-term project viability.”

Storm herself set in motion the creation of key guidelines for ethical AI use for Academy members; she is a governor of the emerging media programming peer group and cochair of the Advocacy committee, cofounder of Playback PBLK and head of studio, narrative, at the AI-native studio Secret Level.

Voice actor Tim Friedlander cofounded the National Association of Voice Actors (NAVA) so that both union and non-union actors could have protections regarding technology; he is NAVA president, as well as the cofounder of the Creators Coalition on AI (CCAI). His recent legislative advocacy in Washington, D.C., is for the No Fakes Act, which would provide intellectual property rights for voice, image, name and likeness.

“Many of us, as performers, are not copyright holders [for] the work we do,” he said. With the act limiting the usage of a digital replica to 10 years, NAVA is advising performers not to sign licensing deals until legislation and lawsuits play out, so as not to sign away their rights in perpetuity.

Former Producers Guild of America president Lori McCreary, the CEO of Revelations Entertainment and Morgan Freeman’s business partner of 30 years, described the level of tracking for production that she must now do related to AI, to ensure that she owns the right to use any aspect — such as material from a sizzle reel, or design elements — that might have been generated by AI.

For a production script, McCreary must ascertain, “Did [the writer] get any research from any of the AI systems? What did they use to write it? Did they write any scenes with AI? I have to be able to track it enough to have a clear chain of title,” she said. “We have to track every piece of the digital DNA of a project from the beginning all the way to distribution, and make sure.”

Filmmaker Eugen Braeunig of the Archival Producers Alliance (APA) discussed the ethical considerations and need for transparency when using synthetic or AI-generated elements in documentaries, so as not to break the contract of trust the audience has with the production team.

Those elements could include “resurrecting the voice of a deceased subject, de-aging one of your characters, an AI dialogue,” he said. “Especially when we’re looking at historical imagery, there’s a fine line between light restoration of historic imagery versus recreating something that was either lost or never recorded in the first place. If your audience is seeing it on screen, unless there’s some sort of disclosure, they’re going to assume that this is authentic material, and they could walk away with some really wrong conclusions.”

The third panel, “Virtual Production and AI: Lights! Camera! VidViz!”, featured Chris Nichols and Daniel Thron, cofounders of Monstrous Moonshine; Nichols is also director of special projects at Chaos Innovation Lab and cohost of the CG Garage podcast. Crafts and entertainment technology journalist Carolyn Giardina moderated.

The two introduced their innovative concept “VidViz,” or video visualization, a previsualization technique that combines real actors with CG elements in real time. Tracing the steps that resulted in VidViz, Nichols talked about ray tracing, a rendering technique that more accurately simulates the behavior of light but is traditionally slow; Chaos was able to speed it up.

When Nichols first showed him VidViz, Thron recalled, “I was like, ‘So we can just shoot scenes. We can go on a blue screen, work out how we want to shoot this and we can rehearse with the actors. And we can make the scene tighter and better — super cheap.’”

In making a short film, he discovered, “I’m in the lens with Richard [Crudo, ASC, the director of photography]. We’re lining up shots in the lens — I’m seeing that instead of a blue screen; I’m seeing the world that this is taking place in. And we can frame shots correctly.

“These tools are so flexible,” he added. “Put them together, save yourself time and money, and you will produce great art. Because the great art comes from you, not from the machine.”

Christina Lee Storm

Photo Credit: Television Academy

The final Wolf Theatre panel, “AI and the Future of Music Making,” featured four music producers and artists. The moderator was Shira Lazar, founder of the online show What’s Trending and initiative Creators4Mental Health.

Producer-songwriter-composer Roahn Hylton did a live demo utilizing music from the finale of the Peacock series Bel-Air, which he scored. In the scene used for his demo, the main characters’ departure for college was a metaphor for the ending of the series. He played and recorded a cue from the theme music, then uploaded it to the audio platform Suno, hit Create and played back four different styles and layerings of prompts.

With more time in real life, he said, “I would take each individual piece, and then I would embellish on it, because as much as it’s fun to create with AI, I didn’t come here to let somebody else do my work for me. So, I’m going to want to make the last output feel and sound like me.”

Also on the panel was electronic music producer Ben Cantil, cofounder and chief technology officer of DataMind Audio, a music tech startup company that builds AI-powered instruments for artists and sound designers. The company’s first plug-in was Combobulator, which transforms a live audio signal into an AI-generated variation in real time.

More recent is the Concatenator: the user loads samples and recordings, such as industrial sounds or chainsaws, then vocalizes into a microphone, with the output recreating their voice as those sounds. It’s as responsive and instantaneous as a live instrument, Cantil noted.

“Music is cultural memory. It is an embodiment of emotion. It is both the labor and fruits of a uniquely human process and response to the world,” he said. “I look at generative AI and AI in general, and I see the beginning of a creative process and the augmentation of human creativity, not its end. So, when AI enters the bloodstream of music and sound creation, I’m not so much interested in a prompt replacing my creative control. I’m primarily interested in exploring the new esthetics and the creative potential that it brings us.”

Cantil believes that artists should take control at all points of the AI process, such as training their own models. However musicians use AI, panelist Jidenna, a rapper-singer-songwriter, closed the discussion with a reminder of the element at the heart of all creativity. “Music is not listened to anymore. It’s watched,” he said. “So how do you make money, make people watch your music? That comes back to story.”

A full list of speakers is available here.