
The purpose of this archive is to document synthetic violence as a form of image-based violence, and as an extension of militarization, political violence, dehumanization, and genocide.
The first archive focuses on anti-Palestinian bias in popular generative AI media and models. This includes both the biases present within foundation models and the ways these technologies have been deployed — across popular media, platforms, and channels, as well as by governmental and state bodies.
By naming and curating these examples, the archive aims to reveal how image-making technologies can become deeply embedded in political violence and dehumanization, with harrowing effects. The intent is to expose these harms, not to retraumatize through the act of archiving.
The archive seeks to identify and label the specific harms, dangers, and recurring tropes that generative AI produces, as well as those it inherits and amplifies from traditional media. The purpose is to resist and de-normalize image violence.
Content warnings have been placed on images that contain especially difficult or distressing material.
This is a participatory archive, and users are invited to contribute through the website. The hope is to bring greater awareness, accountability, and an end to image-making technologies that support dehumanization, political violence, and genocide.