What Do We Take for Granted About AI's Perception of Reality?

I'm E'Narda: composer, metadata specialist, bad perfumer, and MLIS student at the University of Maryland, College Park.

This is a project about interrogating the biases that appear in AI image generation models. As generated images become increasingly ubiquitous—used by everyone from casual hobbyists to government agencies—we need to be more mindful of how we, and the world around us, are represented by AI. It’s not just about what the models show us, but why they show it that way: the data they’re trained on, the labor that shaped those datasets, the decisions made during development, and the motivations behind it all.

To that end, I approached this as a kind of bias stress test, pushing the models with socially and culturally loaded prompts like “immigrant,” “neighbor,” or “child” to see what patterns emerge. I generated hundreds of images per subject across three generationally distinct open-source models (Stable Diffusion 1.5, SDXL, and Flux), then aggregated and analyzed them through composites and metadata tagging. This process exposes the models’ visual defaults and labeling assumptions, and it creates a reproducible framework for auditing bias. In addition to making the behavior of these models more transparent, I wanted to show how information professionals—archivists, metadata specialists, curators—can also step in and shape the future of dataset stewardship.
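
For anyone curious what that looks like in practice, here is a rough sketch of the generate-then-composite loop, assuming the Hugging Face diffusers library. The model IDs, prompt wording, and image counts are illustrative placeholders rather than my exact setup.

```python
# A minimal sketch of the generate-then-composite workflow described above.
# Model IDs, prompts, and counts are illustrative assumptions, not the
# project's actual configuration.
import numpy as np
from PIL import Image
from diffusers import AutoPipelineForText2Image

MODELS = {
    "sd15": "runwayml/stable-diffusion-v1-5",
    "sdxl": "stabilityai/stable-diffusion-xl-base-1.0",
    "flux": "black-forest-labs/FLUX.1-schnell",
}
PROMPTS = ["an immigrant", "a neighbor", "a child"]  # socially loaded subjects

def generate(model_id: str, prompt: str, n: int = 100) -> list[Image.Image]:
    """Generate n images of one subject with one model."""
    pipe = AutoPipelineForText2Image.from_pretrained(model_id).to("cuda")
    return [pipe(prompt).images[0] for _ in range(n)]

def composite(images: list[Image.Image], size: tuple[int, int] = (512, 512)) -> Image.Image:
    """Average a batch of generations into one composite that surfaces visual defaults."""
    stack = np.stack(
        [np.asarray(im.convert("RGB").resize(size), dtype=np.float32) for im in images]
    )
    return Image.fromarray(stack.mean(axis=0).astype(np.uint8))

# Example: composite(generate(MODELS["sd15"], PROMPTS[0])).save("sd15_composite.png")
```

Averaging hundreds of generations per subject this way washes out individual detail and leaves behind whatever the model reaches for by default, which is exactly what the composites are meant to surface.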

Contact Me

Do you have thoughts about AI bias? Or perhaps there's a specific subject you'd like to see evaluated in the gallery?

Feel free to reach out and let me know what's on your mind.