Full metadata record

DC Field / Value
dc.contributor.author: Son, Geonhui
dc.contributor.author: Lee, Jeong Ryong
dc.contributor.author: Hwang, Dosik
dc.date.accessioned: 2026-02-26T09:30:14Z
dc.date.available: 2026-02-26T09:30:14Z
dc.date.created: 2026-02-26
dc.date.issued: 2026-07
dc.identifier.issn: 0893-6080
dc.identifier.uri: https://pubs.kist.re.kr/handle/201004/154381
dc.description.abstract: Generative Adversarial Networks (GANs) have made significant progress in enhancing the quality of image synthesis. Recent methods frequently leverage pretrained networks to calculate perceptual losses or utilize pretrained feature spaces. In this paper, we extend the capabilities of pretrained networks by incorporating innovative self-supervised learning techniques and enforcing consistency between discriminators during GAN training. Our proposed method, named HP-GAN, effectively exploits neural network priors through two primary strategies: FakeTwins and discriminator consistency. FakeTwins leverages pretrained networks as encoders to compute a self-supervised loss and applies this through the generated images to train the generator, thereby enabling the generation of more diverse and high-quality images. Additionally, we introduce a consistency mechanism between discriminators that evaluate feature maps extracted from Convolutional Neural Network (CNN) and Vision Transformer (ViT) feature networks. Discriminator consistency promotes coherent learning among discriminators and enhances training robustness by aligning their assessments of image quality. Our extensive evaluation across seventeen datasets, including scenarios with large, small, and limited data and covering a variety of image domains, demonstrates that HP-GAN consistently outperforms current state-of-the-art methods in terms of Fréchet Inception Distance (FID), achieving significant improvements in image diversity and quality. Code is available at: https://github.com/higun2/HP-GAN.
dc.language: English
dc.publisher: Pergamon Press Ltd.
dc.title: HP-GAN: Harnessing pretrained networks for GAN improvement with FakeTwins and discriminator consistency
dc.type: Article
dc.identifier.doi: 10.1016/j.neunet.2026.108666
dc.description.journalClass: 1
dc.identifier.bibliographicCitation: Neural Networks, v.199
dc.citation.title: Neural Networks
dc.citation.volume: 199
dc.description.isOpenAccess: Y
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.identifier.wosid: 001683690000001
dc.identifier.scopusid: 2-s2.0-105029054729
dc.relation.journalWebOfScienceCategory: Computer Science, Artificial Intelligence
dc.relation.journalWebOfScienceCategory: Neurosciences
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Neurosciences & Neurology
dc.type.docType: Article
dc.subject.keywordAuthor: Image generation
dc.subject.keywordAuthor: Generative adversarial network
dc.subject.keywordAuthor: Pretrained network
dc.subject.keywordAuthor: Self-supervised learning
Appears in Collections:
KIST Article > 2026

