AI-Generated Images Lead to Audiences’ Distrust, Threaten Documentary

Few issues are as widely dissected and discussed within the film business today as the possibilities and consequences of AI technology. When it comes to documentary filmmaking, the discussion takes on a new level of importance, as the form is often tied to journalistic notions of truth and reality. Leading documentarians gathered at this year’s International Documentary Film Festival Amsterdam (IDFA), the world’s largest festival for docs, to discuss best practices, share pressing warnings and consider what the future may look like for documentaries as AI use becomes ever more widespread.
Oscar-nominated U.S. director and investigative reporter David France, at IDFA with Sundance breakout doc “Free Leonard Peltier,” recalled first working with artificial intelligence on 2020’s “Welcome to Chechnya.” The film, which chronicled the persecution of members of the LGBTQIA community in the semi-autonomous Russian republic, dealt with an extremely sensitive situation. To be able to speak to those being persecuted, France had to ensure their identities wouldn’t be revealed. “It was a story that needed to be told, but one that was hard to tell because the people who were able to get out were being chased around the globe.”
The solution France and his team landed on was to obscure their characters’ faces by digitally superimposing other people onto them. “It changed nothing about their micro responses, their emotions. You could see the original person crying and laughing while using somebody else’s face. We weren’t calling it artificial intelligence at the time [2019]. We were calling it machine learning. It just seemed remarkable.”
The director recruited 23 queer activists from New York to lend their faces and voices to the project, a groundbreaking process that earned the team behind the innovation a technical Oscar. Still, the filmmaker was scrutinized for his use of AI. “While we were doing this, everybody was calling it deep fake. We kept saying: It’s not deep fake. Deep fake is the crime, AI is the tool.”
France would use AI again in his films, including his latest, which tells the story of Peltier, an activist jailed for half a century after a disputed conviction. In this case, the director used AI to modify and rejuvenate Peltier’s voice. The technique was needed because the recordings heard in the film were obtained illicitly, since Peltier could not speak to journalists from prison. “In addition to that, Leonard went from being a 30-year-old to an 80-year-old, and you could hear the age in his voice.”
British filmmaker Marc Isaacs is at IDFA with “Synthetic Sincerity,” which sees him make a deal with the titular lab to assist in its research into whether AI characters can be taught authenticity. A blend of fact and fiction, the doc was created in collaboration with Romanian actress Ilinca Manolache (“Do Not Expect Too Much From the End of the World”), with Isaacs cleverly manipulating images through filters and other techniques to emulate what AI-generated sequences would look like.
Isaacs experimented with Synthesia, a synthetic media company whose software is used to create AI-generated video content. “You choose a character and can type in [things] for them to say,” he explained. The director said the AI character soon “bored him to death,” however, because her “range of emotions was really limited.” “It was funny at first, but it became tiresome very quickly.” When he met Manolache at a festival in Bucharest, he proposed they work together on the film, collaborating to create the half-real, half-digital character she plays on screen. “She’s much more interesting, and what we could do with her was more varied. Most actors are terrified of having their personas turned into AI, but she loved it.”
The director did not want to make clear what is AI-generated and what isn’t in the film, pointing out his work is “not journalistic.” “The whole point of the film is to raise questions about images and what’s happening to representation and the death of the camera. I didn’t want to spoil that by labelling things.”
A great portion of the conversation was dedicated to the impact of AI on archival footage. “For archives, the consequences are quite profound,” said Portuguese filmmaker Susana de Sousa Dias (“Fordlândia Panacea”), this year’s Guest of Honor at IDFA and a documentarian who works largely with archival images. “The documentary status of images becomes much easier to contest. There is a risk here that not only can spectators believe fake archival footage, but that people will stop believing anything. In both cases, our regime of truth is completely shaken.”
“Since the transition to digital media, the discourse of the incompleteness of reality in black-and-white and low-definition images has grown,” she added. The director also noted that working with archival material is not only about what can be seen and rescued through research, but also about confronting the gaps in the material and in memory. “The question that interests me is actually very simple and at the same time very complex: what happens when a technology that wants to repair everything enters the field where absence itself is meaningful?”
Thinking about this contemporary conundrum, Emmy-winning filmmaker and graphic designer Eugen Bräunig (“Trafficked”) worked alongside the Archival Producers Alliance to establish a set of guidelines on best practices for working with generative AI in archive-led filmmaking. “In documentary, there is no organizing body that tells us how to do things,” he emphasized. “There are no laws and rules. All we can do is self-regulate and impose certain standards on ourselves to hold ourselves accountable as storytellers, news-makers and image-makers.”
Bräunig pointed out that the most basic and yet most helpful thing is for productions to create a cue sheet listing the technology used, as well as how and when it was used throughout the making of the film. “At some point, people are going to have questions,” he warned. Best to stay ahead of the curve.
At one point in the conversation, the designer played a Sora-generated imitation of a 1990s news clip to illustrate how convincing AI-generated sequences have become. “It was, of course, possible to make fake videos before, but big production money and a great deal of time were required. Now, it’s just too cheap and too fast,” he added.
“Trust in media is at an all-time low,” warned the filmmaker. “That means trust in archives is also threatened. If people start mistrusting news, which they already are, they are also going to potentially develop that sense of mistrust towards documentary filmmaking, and rightfully so in some instances.”