Full metadata record

DC Field: Value

dc.contributor.author: Yerin Choi
dc.contributor.author: Seungyeon Ji
dc.contributor.author: Jiwon Kim
dc.contributor.author: Mi Lim Cheon
dc.contributor.author: Hong Kyu Kim
dc.contributor.author: Kyungjoon Oh
dc.contributor.author: Soyeon Caren Han
dc.contributor.author: Sangwook Yi
dc.contributor.author: Han, Kyung reem
dc.date.accessioned: 2025-12-30T08:30:04Z
dc.date.available: 2025-12-30T08:30:04Z
dc.date.created: 2025-12-02
dc.date.issued: 2025-11-13
dc.identifier.uri: https://pubs.kist.re.kr/handle/201004/153933
dc.identifier.uri: https://openreview.net/pdf?id=9jla4qgmTP
dc.description.abstract: Text-to-Image (T2I) models have advanced rapidly and can generate high-quality images from natural language prompts, yet their outputs often expose social biases, especially along demographic lines such as occupation and race. This raises concerns about the fairness and trustworthiness of T2I models. Current evaluations rely mainly on statistical disparity measures and often overlook the connection to social acceptance and normative expectations. To create a socially grounded framework, we introduce SocialBiasKG (human perception), a structured knowledge graph that captures social nuances in occupation–race bias through global taxonomy-based directed edges of four types: Stereotype, Association, Dominance, and Underrepresentation. We develop (1) a comprehensive bias evaluation dataset and (2) a detailed protocol customized for each edge type and direction. The evaluation metrics include style similarity, representational bias, and image quality, which are applied to ModelBiasKG (model outputs). This allows systematic comparisons across models and against the human-annotated SocialBiasKG, revealing whether T2I models reproduce, distort, or diverge from cultural norms. We demonstrate that our KG-based framework effectively detects nuanced, socially important biases and highlights key gaps between human perception and model behavior. Our approach offers a socially grounded, interpretable, and extensible method for evaluating bias in generative vision models.
dc.language: English
dc.publisher: KR
dc.title: Knowledge Graphical Representation and Evaluation of Social Perception and Bias in Text-to-Image Models
dc.type: Conference
dc.description.journalClass: 1
dc.identifier.bibliographicCitation: 22nd International Conference on Principles of Knowledge Representation and Reasoning, v.1
dc.citation.title: 22nd International Conference on Principles of Knowledge Representation and Reasoning
dc.citation.volume: 1
dc.citation.conferencePlace: AT
dc.citation.conferencePlace: Melbourne, Australia
dc.citation.conferenceDate: 2025-11
dc.relation.isPartOf: First International Workshop on LLMs and KRR for Trustworthy AI
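The abstract describes SocialBiasKG as a directed knowledge graph over occupation–race pairs with four edge types (Stereotype, Association, Dominance, Underrepresentation). As a minimal illustrative sketch of that data structure, the following is not the authors' code; the class name, node labels, and example edges are all hypothetical, and only the four edge-type names come from the abstract:

```python
# Hypothetical sketch of a directed bias knowledge graph; only the
# four edge-type names are taken from the paper's abstract.
EDGE_TYPES = {"Stereotype", "Association", "Dominance", "Underrepresentation"}

class BiasKG:
    def __init__(self):
        # Edges stored as (source, relation, target) triples.
        self.edges = []

    def add_edge(self, source, relation, target):
        # Reject relations outside the four-type taxonomy.
        if relation not in EDGE_TYPES:
            raise ValueError(f"unknown edge type: {relation}")
        self.edges.append((source, relation, target))

    def query(self, relation=None, source=None):
        # Filter triples by edge type and/or source node.
        return [
            (s, r, t) for (s, r, t) in self.edges
            if (relation is None or r == relation)
            and (source is None or s == source)
        ]

# Example nodes and edges (entirely hypothetical):
kg = BiasKG()
kg.add_edge("occupation:nurse", "Association", "race:GroupA")
kg.add_edge("occupation:engineer", "Underrepresentation", "race:GroupB")
print(kg.query(relation="Association"))
```

A human-annotated graph (the paper's SocialBiasKG) and a graph built from model outputs (ModelBiasKG) could then be compared edge-by-edge under a structure like this.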


