Full metadata record

DC Field: Value
dc.contributor.author: Lee, Taigun
dc.contributor.author: Park, Sung-Kee
dc.contributor.author: Park, Mignon
dc.date.accessioned: 2024-01-21T02:03:42Z
dc.date.available: 2024-01-21T02:03:42Z
dc.date.created: 2021-09-01
dc.date.issued: 2006-11-03
dc.identifier.issn: 0020-0255
dc.identifier.uri: https://pubs.kist.re.kr/handle/201004/134965
dc.description.abstract: In this paper, an effective method for detecting facial features is proposed for human-robot interaction (HRI). Given the mobility of a mobile robot, any vision system it carries inevitably faces varying imaging conditions such as pose variations, illumination changes, and cluttered backgrounds. To detect faces correctly under such difficult conditions, we focus on the local intensity patterns of the facial features. Their characteristically dark and directionally distinct patterns provide robust cues for detection. Based on this observation, we propose a new directional template for detecting the major facial features, namely the two eyes and the mouth. Applying this template to a facial image yields a new convolved image, which we refer to as the edge-like blob map. A distinctive characteristic of this map is that it provides a local, directional convolution value for each image pixel, which makes it easier to construct candidate blobs for the major facial features without information about the facial boundary. These candidates are then filtered using conditions on the spatial relationship between the two eyes and the mouth, and the face detection process is completed by applying appearance-based facial templates to the refined facial features. Detection results obtained on various color images and gray-level face-database images demonstrate the usefulness of the proposed method in HRI applications. (C) 2005 Elsevier Inc. All rights reserved.
dc.language: English
dc.publisher: ELSEVIER SCIENCE INC
dc.subject: AUTOMATIC EXTRACTION
dc.subject: RECOGNITION
dc.subject: EYE
dc.title: An effective method for detecting facial features and face in human-robot interaction
dc.type: Article
dc.identifier.doi: 10.1016/j.ins.2005.12.009
dc.description.journalClass: 1
dc.identifier.bibliographicCitation: INFORMATION SCIENCES, v.176, no.21, pp.3166 - 3189
dc.citation.title: INFORMATION SCIENCES
dc.citation.volume: 176
dc.citation.number: 21
dc.citation.startPage: 3166
dc.citation.endPage: 3189
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.identifier.wosid: 000240378500004
dc.identifier.scopusid: 2-s2.0-33746846521
dc.relation.journalWebOfScienceCategory: Computer Science, Information Systems
dc.relation.journalResearchArea: Computer Science
dc.type.docType: Article
dc.subject.keywordPlus: AUTOMATIC EXTRACTION
dc.subject.keywordPlus: RECOGNITION
dc.subject.keywordPlus: EYE
dc.subject.keywordAuthor: facial features and face detection
dc.subject.keywordAuthor: human-robot interaction
dc.subject.keywordAuthor: edge-like blob map
dc.subject.keywordAuthor: directional template
dc.subject.keywordAuthor: eye-pairs
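The abstract describes convolving a directional template over a facial image to obtain an edge-like blob map whose strong responses mark locally dark, horizontally oriented features such as the eyes and mouth. The following is a minimal illustrative sketch of that idea in NumPy; the kernel weights, function name, and toy image are hypothetical stand-ins, not the actual template or data from the paper:

```python
import numpy as np

# Hypothetical directional template: eyes and the mouth tend to appear as
# locally dark, horizontally elongated patterns, so this kernel rewards a
# dark middle row flanked by brighter rows above and below. These weights
# are illustrative only, not the template proposed in the paper.
TEMPLATE = np.array([
    [ 1,  1,  1,  1,  1],
    [-1, -2, -2, -2, -1],
    [ 1,  1,  1,  1,  1],
], dtype=float)

def edge_like_blob_map(gray):
    """Convolve the directional template over a grayscale image (valid
    region only); high responses mark dark horizontal blob candidates."""
    h, w = gray.shape
    kh, kw = TEMPLATE.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(TEMPLATE * gray[y:y + kh, x:x + kw])
    return out

# Toy image: bright background (200) with one dark horizontal bar (40)
# standing in for an eye region.
img = np.full((9, 11), 200.0)
img[4, 3:8] = 40.0

blob = edge_like_blob_map(img)
peak = np.unravel_index(np.argmax(blob), blob.shape)  # strongest blob response
```

In the paper's pipeline such peak responses would become candidate blobs, later filtered by the spatial relationship between the two eye candidates and the mouth candidate.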
Appears in Collections:
KIST Article > 2006
Files in This Item:
There are no files associated with this item.