If I Want to Mitigate Harmful Bias, How Can I Help?
If you're an information professional, you are uniquely qualified to mitigate harmful bias in AI. In fact, if you love to share knowledge and live for those moments when you see understanding light up in someone's mind, you already have the necessary skills. You know how to organize knowledge, you know how to develop collections, and you spend every day thinking about challenges to information literacy. Ethical data stewardship, user-centered design, and metadata management are in your DNA.
These skills translate directly to AI ethics work. Your expertise in evaluating sources helps identify problematic datasets before they train models. Your commitment to diverse representation informs more inclusive data collection practices. Your reference skills make you an ideal translator between technical teams and communities affected by AI.
You understand that context matters: a book shelved in fiction sends a different message than the same book in non-fiction. That awareness helps you recognize when AI systems miscategorize or misrepresent information in harmful ways. This expertise is needed now more than ever.

Build your own datasets. Try curating small, intentional image or text datasets that foreground underrepresented voices. Pair them with transparent, standardized documentation, such as a Datasheet for Datasets describing the data and a Model Card for any model trained on it; a minimal sketch follows.
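To make that concrete, here is a minimal sketch of datasheet-style documentation stored alongside a small curated collection. The fields loosely echo the questions posed in Datasheets for Datasets, but the specific keys, the collection name, and the file path are illustrative assumptions, not an official schema.

    # Minimal, illustrative datasheet record for a small curated collection.
    # Field names loosely echo Datasheets for Datasets; this is not an official schema.
    import json
    from pathlib import Path

    datasheet = {
        "title": "Community Voices Photo Collection",  # hypothetical dataset name
        "motivation": "Foreground underrepresented local artists in image search.",
        "composition": {"items": 250, "media": "photographs", "languages": ["en", "es"]},
        "collection_process": "Contributed directly by artists under signed agreements.",
        "known_gaps": ["No contributors under 18", "Rural communities underrepresented"],
        "recommended_uses": ["search demos", "bias-audit baselines"],
        "prohibited_uses": ["facial recognition training"],
        "maintainer": "Digital collections team",
    }

    Path("community_voices_datasheet.json").write_text(json.dumps(datasheet, indent=2))

Keeping a record like this next to the files means anyone who reuses the collection inherits its context, its known gaps, and its stated limits.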
Understand how AI is built. Demystify the pipelines behind generative models. Learn how data is scraped, labeled, filtered, and trained. Knowing the mechanics helps you explain AI to patrons, colleagues, and impacted communities in plain terms.
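If it helps to see those stages side by side, the toy sketch below walks through scrape, label, filter, and train with stand-in functions. Every name and data value is invented purely to show where bias can enter at each step; real pipelines are vastly larger and messier.

    # Toy stand-ins for the scrape -> label -> filter -> train stages of a pipeline.

    def scrape():
        # In practice: web crawls, vendor dumps, digitized collections.
        return ["Portrait of a CEO", "Portrait of a nurse", "spam spam spam"]

    def label(records):
        # In practice: crowdworkers or models assign tags; their judgments shape the data.
        return [(text, "person") for text in records]

    def filter_records(labeled):
        # In practice: "quality" heuristics can silently drop entire communities.
        return [(text, tag) for text, tag in labeled if "spam" not in text]

    def train(examples):
        # In practice: a model learns whatever patterns survive the earlier stages.
        print(f"Training on {len(examples)} examples; any remaining skew is learned too.")

    train(filter_records(label(scrape())))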
Learn to recognize bias in AI systems. Study how models reinforce stereotypes through image generation, tagging, and ranking. Explore real-world examples of algorithmic harm and pay attention to who gets erased, misrepresented, or overrepresented.
There are several specific approaches you can take:
Implement community-informed curation practices. Involve subject experts, cultural stakeholders, and frontline workers in selecting and describing data. Even with the best intentions, you can’t correct a lack of representation by relying on an even narrower point of view. Without shared authority, attempts to fix bias often reinforce it.
Run your own audits. Pick a dataset your institution hosts (or a tool it uses) and conduct a prompt-based bias test. Document and share what you find. Propose evaluation criteria for vendors. Use your findings to influence future procurement and digital strategy.
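As a sketch of what such an audit might look like, the snippet below varies a role word in a fixed prompt and tallies crude pronoun markers in the responses. The generate function is a placeholder for whichever tool or API your institution actually uses, and the prompt template, roles, and markers are assumptions you would refine for a real audit.

    # Sketch of a prompt-based bias test; not a validated methodology.
    from collections import Counter

    def generate(prompt: str) -> str:
        # Placeholder: wire this up to the tool or vendor API under audit.
        raise NotImplementedError("Connect this to the system you are auditing.")

    TEMPLATE = "Describe a typical {role}."
    ROLES = ["librarian", "nurse", "engineer", "CEO"]
    MARKERS = {"he", "she", "they"}  # crude illustrative markers

    def audit():
        results = {}
        for role in ROLES:
            words = generate(TEMPLATE.format(role=role)).lower().split()
            results[role] = Counter(w for w in words if w in MARKERS)
        return results

    # print(audit())  # run once generate() points at a real system

Whatever the counts show, the written record of what you tested and found, and the conversation it opens with vendors, is the real deliverable.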
Embed AI and data literacy education within information literacy programs. Include how AI shapes search, recommendation, and generation to help students question how images and texts are produced—and who gets left out.
Use LIS frameworks to guide responsible AI work. Align your institutional practices with guidelines from the Journal of eScience Librarianship and related scholarly frameworks on responsible AI stewardship.