I recently attended a panel at New York Advertising Week exploring the role of AI in creative development. Coincidentally, that same week the Ad Council and Leo Burnett also launched our first AI-driven creative on behalf of Feeding America. The spot features a “virtual visage” inspired by the faces of over 1,000 Americans afflicted by hunger. The panel discussion raised interesting questions around the ethics of generating “synthetic media” (algorithmically created or modified media), whether doing this is actually new, and whether it’s better than, worse than, or simply different from human output.
What is “real?”
The form of “synthetic media” dominating news headlines is the “deepfake” — images or video altered by AI, often for nefarious purposes. But even more benign synthetic media raise ethical questions, whether AI-generated art and music or virtual celebrities like Lil Miquela (pictured above) or Yona. They put the onus on consumers to verify whether what they are seeing or engaging with is real. And for AI-generated media with multiple human inputs, there are new questions around ownership. An AI-generated piece of artwork sold at Christie’s for $432K. While it sold largely because of the novelty of the medium, it raises questions such as who owns the work: the person who wrote the code that generated it, or the person who published it?
There’s no such thing as “new”
I grew up when Max Headroom was a thing. He doesn’t look as polished as Lil Miquela, but graphics have come a long way since the 1980s. One point the panelists seemed to agree on is that the idea of “synthetic” or enhanced media isn’t really new. They pointed to examples such as airbrushing models for magazines or using filters on Instagram or Snapchat as already having been “part of the game.” They argued AI-generated media is an evolution of what’s long been done, not some scary new “Terminator.”
“Nothing will replace Prince singing Purple Rain in the rain at the Super Bowl”
Much of the discussion revolved around whether AI could actually replace human creativity. The panel featured Amper Music, a company that uses musicologists to build datasets that generate music for different purposes, such as music optimized for running. Lara Fitch, Amper’s Head of Product, cited an interview with artist Nick Cave, who argued that AI can generate good but not great music because it lacks the feelings — the pain and passion — that make music great. But with human creators feeding their own pain and passion into the data, we don’t know whether AI-generated music could become great in the future.
For now, I’m in Nick Cave’s camp. AI-generated influencers, pop stars and music are all interesting, and I’m glad we are experimenting with the technology to break through the clutter and raise awareness around an issue like hunger. But they don’t replace interacting with real humans, or the beauty that comes from all the messy emotions of our humanity — at least not yet.