Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Son, Geonhui | - |
| dc.contributor.author | Lee, Jeong Ryong | - |
| dc.contributor.author | Hwang, Dosik | - |
| dc.date.accessioned | 2026-02-26T09:30:14Z | - |
| dc.date.available | 2026-02-26T09:30:14Z | - |
| dc.date.created | 2026-02-26 | - |
| dc.date.issued | 2026-07 | - |
| dc.identifier.issn | 0893-6080 | - |
| dc.identifier.uri | https://pubs.kist.re.kr/handle/201004/154381 | - |
| dc.description.abstract | Generative Adversarial Networks (GANs) have made significant progress in enhancing the quality of image synthesis. Recent methods frequently leverage pretrained networks to calculate perceptual losses or utilize pretrained feature spaces. In this paper, we extend the capabilities of pretrained networks by incorporating innovative self-supervised learning techniques and enforcing consistency between discriminators during GAN training. Our proposed method, named HP-GAN, effectively exploits neural network priors through two primary strategies: FakeTwins and discriminator consistency. FakeTwins leverages pretrained networks as encoders to compute a self-supervised loss and applies this loss through the generated images to train the generator, thereby enabling the generation of more diverse and high-quality images. Additionally, we introduce a consistency mechanism between discriminators that evaluate feature maps extracted from Convolutional Neural Network (CNN) and Vision Transformer (ViT) feature networks. Discriminator consistency promotes coherent learning among the discriminators and enhances training robustness by aligning their assessments of image quality. Our extensive evaluation across seventeen datasets, including scenarios with large, small, and limited data and covering a variety of image domains, demonstrates that HP-GAN consistently outperforms current state-of-the-art methods in terms of Fréchet Inception Distance (FID), achieving significant improvements in image diversity and quality. Code is available at: https://github.com/higun2/HP-GAN. | - |
| dc.language | English | - |
| dc.publisher | Pergamon Press Ltd. | - |
| dc.title | HP-GAN: Harnessing pretrained networks for GAN improvement with FakeTwins and discriminator consistency | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1016/j.neunet.2026.108666 | - |
| dc.description.journalClass | 1 | - |
| dc.identifier.bibliographicCitation | Neural Networks, v.199 | - |
| dc.citation.title | Neural Networks | - |
| dc.citation.volume | 199 | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.identifier.wosid | 001683690000001 | - |
| dc.identifier.scopusid | 2-s2.0-105029054729 | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.relation.journalWebOfScienceCategory | Neurosciences | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Neurosciences & Neurology | - |
| dc.type.docType | Article | - |
| dc.subject.keywordAuthor | Image generation | - |
| dc.subject.keywordAuthor | Generative adversarial network | - |
| dc.subject.keywordAuthor | Pretrained network | - |
| dc.subject.keywordAuthor | Self-supervised learning | - |
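The abstract describes a discriminator-consistency mechanism that aligns how the CNN-feature and ViT-feature discriminators assess image quality. The following is a minimal illustrative sketch, not the paper's actual implementation: it assumes each discriminator emits a scalar realness score per image, and the function name and the squared-error form of the penalty are assumptions. The authors' actual loss is available in the linked repository.

```python
import numpy as np

def consistency_loss(cnn_scores, vit_scores):
    """Hypothetical consistency penalty: mean squared difference between the
    CNN-feature and ViT-feature discriminators' per-image realness scores.
    A smaller value means the two discriminators agree more closely."""
    cnn = np.asarray(cnn_scores, dtype=float)
    vit = np.asarray(vit_scores, dtype=float)
    return float(np.mean((cnn - vit) ** 2))

# Example: both discriminators score the same batch of four images.
cnn_scores = [0.9, 0.2, 0.7, 0.4]
vit_scores = [0.8, 0.3, 0.6, 0.5]
loss = consistency_loss(cnn_scores, vit_scores)
```

Minimizing such a penalty alongside the usual adversarial losses would push the two discriminators toward a shared notion of image quality, which is the intuition the abstract attributes to discriminator consistency.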