Published: 6 February 2025
Last updated: 7 February 2025
I fell into AI research almost by accident. Using an apparently harmless beauty filter on TikTok, I noticed that my features had been ‘Europeanised’ in order to make me “more attractive”. I was surprised to see a version of me with green eyes and straight hair displace my brown eyes and afro.
That experience led to my research on whether bias embedded in image-generative AI systems can influence public perception.
In my research, I asked participants to comment on images generated by Midjourney, a very popular AI tool that generates images from a user's written prompt.
My study found that when the images displayed biases, participants' responses became more biased. As the tool showed more bias towards a group, participants were influenced by those biases and went on to display them themselves.
AI-generated images and deepfakes are ever more present online. Some predict that, within a few years, nine in ten images on the internet will be synthetically generated.
As AI tools become more and more ubiquitous and better developed, they have the potential to influence consumers and internet users worldwide.
This means that elections, public opinion and individual perceptions of groups, ethnicities, and even countries can be influenced or manipulated. AI-generated images are thus a powerful tool of propaganda.
The danger of manipulated and stereotyped images is aggravated by the anthropomorphisation of those tools. Because many AI tools can seem human or attempt to emulate human conversations and behaviours, there is an imminent danger that AI will be perceived as more trustworthy than the static information we usually get from search engines such as Google.
The problems with AI images are particularly relevant for Jewish people, especially given the recent rise in antisemitism in Australia.
I investigated what generative AI thinks of Jewish people. I asked it to produce pictures in response to prompts including “Australian Jewish family eating ice cream on the streets of Melbourne”, “Jewish family celebrating Chanukah on Bourke Street” or simply “Jewish Australians”.
The trend was noticeable. For AI, Jews are mostly bearded men who are Orthodox. Jewish families are often depicted as a man with two children, no woman in sight. All Jews are Ashkenazi.
It gets worse. When I prompted for Zionists, AI spat out images of military men holding machine guns and assault rifles.
These images display the in-built bias and inaccuracy of AI and expose genuine dangers for Jews at a time of growing antisemitism.
The people who developed those tools and annotated those datasets evidently have an obtuse and problematic view of Jews and Judaism, and, as my research shows, if AI thinks Jews are a certain way, then users will likely be influenced by those biases.
AI aggravates the tendency for majorities to control the narratives, rather than enabling religious and racial minorities to choose what stories we tell about ourselves.
AI is also biased towards majorities within groups, which is why, when asked for images of Australian Jews, it failed to deliver pictures of Mizrahi or Sephardi Jews.
The erasure of Jewish women, so central to our families, lifestyle and culture, is preposterous.
Stereotypes fuel dehumanisation, and dehumanisation can ultimately lead to violence. Ideas of a homogenised, mostly male, mostly Ashkenazi Jewish people are dangerous and, in my view, antisemitic.
Artificial Intelligence is no longer a distant concept confined to research or to science fiction. It is a ubiquitous force, deeply embedded in nearly every industry, from healthcare to marketing, from education to criminal justice. It is achieving many positive outcomes, such as helping women conceive or even tackling climate change.
This piece does not advocate cancelling those tools or abandoning AI. However, with great power comes great responsibility. I strongly advocate for better research in the area, better AI policies, and governmental and institutional regulation.
AI is biased. Better data, policy and governance are required to mitigate its dangers.
Comments (4)
Jack · 6 February at 09:19 am
Does AI differ from the way a human being would depict a Jew? It is a silly question to ask anybody to depict what is essentially a social construct, a culture, a way of life. In any case, these depictions are relatively realistic pictures of what the average observant Jew looks like. The rest of us are merely Jew-ish, i.e. not quite Jews. We drive on Shabbat, don't keep kosher, etc.
Why not ask what an Orthodox Greek looks like, a First Nations Australian, a good worker, etc… The reality is that we are humans, not social constructs.
David Knoll · 6 February at 06:45 am
We owe Guido much thanks. The unfiltered internet is where my 20- and 30-something children get their news and images. So must their generation, who do not know the diverse and beautiful Jewish people. AI, by way of its stereotyping, is the new NAZI.
Naomi Vallins · 6 February at 06:42 am
Quite a while before AI, the typical picture of a Jew on TV news was a hatted, bearded man and, for a little variation, one with peyot.
Ian Light · 6 February at 12:54 am
Disturbing simplicity, but a paradoxical relief that AI is often highly fallible.