References
Bianchi, F., Kalluri, P., Durmus, E., Ladhak, F., Cheng, M., Nozza, D., Hashimoto, T., Jurafsky, D., Zou, J., & Caliskan, A. (2023, June 7). Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale. In Proceedings of ACM FAccT 2023 (pp. 1493–1504). ACM. https://arxiv.org/abs/2211.03759.
Birhane, A., Prabhu, V. U., & Kahembwe, E. (2021, October 5). Multimodal datasets: Misogyny, pornography, and malignant stereotypes. arXiv. https://arxiv.org/abs/2110.01963.
Chandran, R., Smith, A., & Ramos, M. (2023, March 14). AI boom is dream and nightmare for workers in Global South. Context. https://www.context.news/ai/ai-boom-is-dream-and-nightmare-for-workers-in-global-south.
Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2021). Datasheets for Datasets. Communications of the ACM, 64(12), 86–92. https://arxiv.org/abs/1803.09010.
Ghosh, S., & Caliskan, A. (2023). 'Person' == Light-skinned, Western Man, and Sexualization of Women of Color: Stereotypes in Stable Diffusion. In Findings of the Association for Computational Linguistics: EMNLP 2023 (pp. 6971–6985). Association for Computational Linguistics. https://aclanthology.org/2023.findings-emnlp.465/.
Girrbach, L., Alaniz, S., Smith, G., & Akata, Z. (2025). A Large Scale Analysis of Gender Biases in Text-to-Image Generative Models. arXiv. https://arxiv.org/abs/2503.23398.
Hong, R., Agnew, W., Kohno, T., & Morgenstern, J. (2024). Who's in and who's out? A case study of multimodal CLIP-filtering in DataComp. In Proceedings of the 2024 ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO '24), 4:1–4:17. https://dl.acm.org/doi/10.1145/3689904.3694702.
Lan, X., An, J., Guo, Y., Tong, C., Cai, X., & Zhang, J. (2025, April 7). Imagining the Far East: Exploring Perceived Biases in AI-Generated Images of East Asian Women. arXiv. https://arxiv.org/abs/2504.04865.
Leu, W., Nakashima, Y., & Garcia, N. (2024). Auditing Image-based NSFW Classifiers for Content Filtering. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24) (pp. 1–11). https://facctconference.org/static/papers24/facct24-78.pdf.
Mannheimer, S., Rossmann, D., Clark, J., Shorish, Y., Bond, N., Scates Kettler, H., Sheehey, B., & Young, S. W. H. (2024, March 6). Introduction to the Special Issue: Responsible AI in Libraries and Archives. Journal of eScience Librarianship, 13(1), e860. https://doi.org/10.7191/jeslib.860.
Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019, January 14). Model Cards for Model Reporting. arXiv. https://arxiv.org/abs/1810.03993.
Nicoletti, L., & Bass, D. (2023, October 18). Generative AI takes stereotypes and bias from bad to worse. Bloomberg News. https://www.bloomberg.com/graphics/2023-generative-ai-bias/.
Nwatu, J., Ignat, O., & Mihalcea, R. (2023, December 8). Biases in large image-text AI model favor wealthier, Western perspectives. University of Michigan News. https://news.umich.edu/biases-in-large-image-text-ai-model-favor-wealthier-western-perspectives/.
Park, S. H., Koh, J. Y., Lee, J., Song, J., Kim, D., Moon, H., Lee, H., & Song, M. (2024). Illustrious: An open advanced illustration model. arXiv. https://arxiv.org/abs/2409.19946.
Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. (2022). High-Resolution Image Synthesis with Latent Diffusion Models. In Proceedings of CVPR 2022 (pp. 10684–10695). IEEE. https://arxiv.org/abs/2112.10752.
Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., Schramowski, P., Kundurthy, S., Crowson, K., Schmidt, L., Kaczmarczyk, R., & Jitsev, J. (2022, October 16). LAION-5B: An open large-scale dataset for training next generation image-text models. arXiv. https://arxiv.org/abs/2210.08402.
Solaiman, I., Talat, Z., Agnew, W., Ahmad, L., Baker, D., Blodgett, S. L., Daumé III, H., Dodge, J., Evans, E., Hooker, S., Jernite, Y., Luccioni, A. S., Lusoli, A., Mitchell, M., Newman, J., Png, M., Strait, A., & Vassilev, A. (2023). Evaluating the social impact of generative AI systems in systems and society. ResearchGate. https://www.researchgate.net/publication/371490141_Evaluating_the_Social_Impact_of_Generative_AI_Systems_in_Systems_and_Society.
© 2025 E'Narda McCalister. All rights reserved.