Full metadata record

DC Field: Value
dc.contributor.author: Nguyen, Anh-Duc
dc.contributor.author: Kim, Jongyoo
dc.contributor.author: Oh, Heeseok
dc.contributor.author: Kim, Haksub
dc.contributor.author: Lin, Weisi
dc.contributor.author: Lee, Sanghoon
dc.date.accessioned: 2024-01-19T20:31:23Z
dc.date.available: 2024-01-19T20:31:23Z
dc.date.created: 2021-09-02
dc.date.issued: 2019-04
dc.identifier.issn: 1057-7149
dc.identifier.uri: https://pubs.kist.re.kr/handle/201004/120177
dc.description.abstract: Visual saliency on stereoscopic 3D (S3D) images has been shown to be heavily influenced by image quality. This dependency makes saliency an important factor in image quality prediction, image restoration, and discomfort reduction, yet such a nonlinear relation remains very difficult to predict. Moreover, most algorithms specialized in detecting visual saliency on pristine images unsurprisingly fail when facing distorted images. In this paper, we investigate a deep learning scheme named Deep Visual Saliency (DeepVS) to achieve a more accurate and reliable saliency predictor even in the presence of distortions. Since, from a psychophysical point of view, visual saliency is influenced by low-level features (contrast, luminance, and depth information), we propose seven low-level features derived from S3D image pairs and use them within a deep learning framework to detect visual attention adaptively to human perception. Our analysis shows that these low-level features help extract both distortion and saliency information. To construct saliency predictors, we weight and model human visual saliency through two network architectures: a regression network and a fully convolutional network. Results from thorough experiments confirm that the predicted saliency maps are up to 70% correlated with human gaze patterns, which emphasizes the need for hand-crafted features as input to deep neural networks in S3D saliency detection.
dc.language: English
dc.publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
dc.subject: PREDICTION
dc.subject: ATTENTION
dc.subject: QUALITY
dc.subject: MODEL
dc.title: Deep Visual Saliency on Stereoscopic Images
dc.type: Article
dc.identifier.doi: 10.1109/TIP.2018.2879408
dc.description.journalClass: 1
dc.identifier.bibliographicCitation: IEEE TRANSACTIONS ON IMAGE PROCESSING, v.28, no.4, pp.1939 - 1953
dc.citation.title: IEEE TRANSACTIONS ON IMAGE PROCESSING
dc.citation.volume: 28
dc.citation.number: 4
dc.citation.startPage: 1939
dc.citation.endPage: 1953
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.identifier.wosid: 000453552100004
dc.identifier.scopusid: 2-s2.0-85056147202
dc.relation.journalWebOfScienceCategory: Computer Science, Artificial Intelligence
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.type.docType: Article
dc.subject.keywordPlus: PREDICTION
dc.subject.keywordPlus: ATTENTION
dc.subject.keywordPlus: QUALITY
dc.subject.keywordPlus: MODEL
dc.subject.keywordAuthor: Saliency prediction
dc.subject.keywordAuthor: stereoscopic image
dc.subject.keywordAuthor: distorted image
dc.subject.keywordAuthor: convolutional neural network
dc.subject.keywordAuthor: deep learning
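
The abstract above describes feeding seven hand-crafted low-level feature maps (contrast, luminance, depth, and related cues) from an S3D image pair into a fully convolutional network that outputs a saliency map. The following is a minimal PyTorch sketch of that general idea only: the layer widths, kernel sizes, pooling, and sigmoid output are illustrative assumptions, not the paper's actual DeepVS architecture, and the seven input channels simply mirror the feature count stated in the abstract.

```python
import torch
import torch.nn as nn

class FCNSaliencySketch(nn.Module):
    """Toy fully convolutional saliency predictor over a stack of
    seven hand-crafted low-level feature maps (hypothetical design,
    not the DeepVS network from the paper)."""

    def __init__(self, in_channels: int = 7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        # A 1x1 convolution head keeps the model fully convolutional,
        # so it accepts inputs of arbitrary spatial resolution.
        self.head = nn.Conv2d(64, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        h, w = feats.shape[-2:]
        x = self.head(self.encoder(feats))
        # Upsample back to the input resolution and squash to [0, 1]
        # so the output can be read as a dense saliency map.
        x = nn.functional.interpolate(
            x, size=(h, w), mode="bilinear", align_corners=False
        )
        return torch.sigmoid(x)

# Usage: one batch of 7-channel feature stacks derived from an S3D pair.
feats = torch.randn(1, 7, 240, 320)
saliency = FCNSaliencySketch()(feats)  # -> (1, 1, 240, 320) saliency map
```

Because the head is a 1x1 convolution and there are no fully connected layers, the same weights apply at any input resolution; that resolution independence is what distinguishes the fully convolutional variant from the regression network also mentioned in the abstract.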
Appears in Collections: KIST Article > 2019
Files in This Item: There are no files associated with this item.