Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Yerin Choi | - |
| dc.contributor.author | Seungyeon Ji | - |
| dc.contributor.author | Jiwon Kim | - |
| dc.contributor.author | Mi Lim Cheon | - |
| dc.contributor.author | Hong Kyu Kim | - |
| dc.contributor.author | Kyungjoon Oh | - |
| dc.contributor.author | Soyeon Caren Han | - |
| dc.contributor.author | Sangwook Yi | - |
| dc.contributor.author | Kyungreem Han | - |
| dc.date.accessioned | 2025-12-30T08:30:04Z | - |
| dc.date.available | 2025-12-30T08:30:04Z | - |
| dc.date.created | 2025-12-02 | - |
| dc.date.issued | 2025-11-13 | - |
| dc.identifier.uri | https://pubs.kist.re.kr/handle/201004/153933 | - |
| dc.identifier.uri | https://openreview.net/pdf?id=9jla4qgmTP | - |
| dc.description.abstract | Text-to-Image (T2I) models have advanced rapidly and can now generate high-quality images from natural language prompts; yet T2I outputs often expose social biases, especially along demographic lines such as occupation and race, raising concerns about the fairness and trustworthiness of T2I systems. Current evaluations rely mainly on statistical disparity measures and often overlook the connection to social acceptance and normative expectations. To create a socially grounded framework, we introduce SocialBiasKG (human perception), a structured knowledge graph that captures social nuances in occupation–race bias through global taxonomy-based directed edges: Stereotype, Association, Dominance, and Underrepresentation. We develop (1) a comprehensive bias evaluation dataset and (2) a detailed protocol customized for each edge type and direction. The evaluation metrics include style similarity, representational bias, and image quality, and are applied to ModelBiasKG (model outputs). This enables systematic comparisons across models and against the human-annotated SocialBiasKG, revealing whether T2I models reproduce, distort, or diverge from cultural norms. We demonstrate that our KG-based framework effectively detects nuanced, socially important biases and highlights key gaps between human perception and model behavior. Our approach offers a socially grounded, interpretable, and extendable method for evaluating bias in generative vision models. | - |
| dc.language | English | - |
| dc.publisher | KR | - |
| dc.title | Knowledge Graphical Representation and Evaluation of Social Perception and Bias in Text-to-Image Models | - |
| dc.type | Conference | - |
| dc.description.journalClass | 1 | - |
| dc.identifier.bibliographicCitation | 22nd International Conference on Principles of Knowledge Representation and Reasoning, v.1 | - |
| dc.citation.title | 22nd International Conference on Principles of Knowledge Representation and Reasoning | - |
| dc.citation.volume | 1 | - |
| dc.citation.conferencePlace | AU | - |
| dc.citation.conferencePlace | Melbourne, Australia | - |
| dc.citation.conferenceDate | 2025-11 | - |
| dc.relation.isPartOf | First International Workshop on LLMs and KRR for Trustworthy AI | - |
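
The abstract above describes SocialBiasKG as a directed knowledge graph whose occupation–race edges each carry one of four taxonomy types (Stereotype, Association, Dominance, Underrepresentation). Below is a minimal sketch of how such a typed, directed graph could be encoded, assuming a `networkx` multigraph representation; the node labels, weights, and schema are illustrative assumptions, not the authors' released format:

```python
import networkx as nx

# Edge taxonomy named in the abstract; the schema below is an
# illustrative assumption, not the paper's released data format.
EDGE_TYPES = {"Stereotype", "Association", "Dominance", "Underrepresentation"}

def add_bias_edge(kg: nx.MultiDiGraph, occupation: str, race: str,
                  edge_type: str, weight: float = 1.0) -> None:
    """Add one directed occupation->race edge tagged with a taxonomy type."""
    if edge_type not in EDGE_TYPES:
        raise ValueError(f"unknown edge type: {edge_type}")
    kg.add_node(occupation, kind="occupation")
    kg.add_node(race, kind="race")
    # MultiDiGraph keys let the same node pair carry several edge types.
    kg.add_edge(occupation, race, key=edge_type, weight=weight)

# Hypothetical example entries (not taken from the paper's data):
social_kg = nx.MultiDiGraph(name="SocialBiasKG")
add_bias_edge(social_kg, "nurse", "white", "Stereotype")
add_bias_edge(social_kg, "engineer", "asian", "Association")

# Retrieve all edges of one taxonomy type, e.g. Stereotype:
stereotypes = [(u, v) for u, v, k in social_kg.edges(keys=True)
               if k == "Stereotype"]
```

Under this encoding, comparing a ModelBiasKG built from model outputs against the human-annotated SocialBiasKG, as the abstract describes, reduces to per-type comparisons over the two graphs' typed edge sets.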