AI produces stereotypical and anachronistic images when portraying the past
An AI image model's curious vision of the 1930s: black-and-white with period clothing... and mobile phones.
Can AI help us understand the past – or does it project a hallucination of history? A research project tested how an image model visualises the year 1936: the results are stereotypical, anachronistic and shaped by today's digital image flows.
The researchers deliberately chose "1936 in Scandinavia" – a seemingly unspectacular year. Instead of focusing on dramatic years such as 1939, they wanted to investigate how AI handles the everyday. Is it possible to see changes in modernity by analysing large amounts of images, sound and film using AI?
They used the AI model Stable Diffusion, trained on 5.8 billion online images. A key question was: if an AI model is asked to show the year 1936, what kind of images will it produce?
Simple prompts such as “women talking on the phone” revealed limitations. The images could look historically credible – black-and-white photographs with period clothing – but contained obvious anachronisms, such as mobile phones.
“This shows that the training data is skewed towards a mobile-centric culture and that a large proportion of the training images come from more recent decades,” says Fredrik Mohammadi Norén, assistant professor of media and communication studies.
Homogeneous and stereotypical
Even when the instructions were open-ended, the results were homogeneous and stereotypical. For example, a prompt about cycling in Scandinavia almost always resulted in Norwegian mountains in the background.
When the researchers asked the model to create images of events that probably never took place – such as a meeting between three renowned Scandinavian writers – it generated generic images with blurred faces. The model is designed to always produce a result, even when there is no basis for it.
“The model must always deliver, so you have to be careful about how you use this type of historical representation,” says Norén.
'A passive woman'
Another example concerned Karen Blixen, author of the bestselling memoir Out of Africa, on safari. Although there are authentic images of her as an active participant, the model created stereotypical motifs, such as a passive woman leaning against a car.
“There are real images of Karen Blixen shooting a lion. This is quite far from the stereotypical images of a passive woman.”
When the model was asked to show representative images of Sweden in 1936, the result was general city and landscape images. “The model creates a representation of the year, but it may not be that representative,” says Norén.
A need to understand how the tools work
The study shows that AI-generated images depicting the past are representations shaped by the training data and the model's algorithms. History risks becoming simplified, stereotypical and anachronistic. The project did not provide a new representative picture of 1936; rather, it showed how today's AI systems imagine history.
“As more and more people use AI to visualise the past, it is crucial to understand how the tools work,” says Norén.
On the Historical Gaze of Generative AI: Visions of Scandinavia in Stable Diffusion
The study was conducted by Fredrik Mohammadi Norén, Emil Stjärnholm, associate professor at the Department of Communication, Lund University, and Maria Eriksson at the European Centre for Algorithmic Transparency (ECAT).
Read the article in Scandinavian Journal of History