Full metadata record

DC Field | Value | Language
dc.contributor.author | Park, Soonyong | -
dc.contributor.author | Kim, Soohwan | -
dc.contributor.author | Park, Mignon | -
dc.contributor.author | Park, Sung-Kee | -
dc.date.accessioned | 2024-01-20T20:04:03Z | -
dc.date.available | 2024-01-20T20:04:03Z | -
dc.date.created | 2021-09-05 | -
dc.date.issued | 2009-12-15 | -
dc.identifier.issn | 0020-0255 | -
dc.identifier.uri | https://pubs.kist.re.kr/handle/201004/131879 | -
dc.description.abstract | This paper presents a novel vision-based global localization method that uses hybrid maps of objects and spatial layouts. We model indoor environments with a stereo camera using the following visual cues: local invariant features for object recognition and their 3D positions for object pose estimation. We also use the depth information at the horizontal centerline of the image, through which the optical axis passes; this is similar to the data from a 2D laser range finder. This allows us to build a topological node composed of a horizontal depth map and an object location map. The horizontal depth map describes the explicit spatial layout of each local space and provides metric information for computing the spatial relationships between adjacent spaces, while the object location map contains the pose information of objects found in each local space and the visual features for object recognition. Based on this map representation, we propose a coarse-to-fine strategy for global localization. The coarse pose is estimated by means of object recognition and SVD-based point cloud fitting, and is then refined by stochastic scan matching. Experimental results show that our approach serves as an effective vision-based map representation as well as a global localization method. (C) 2009 Published by Elsevier Inc. | -
dc.language | English | -
dc.publisher | ELSEVIER SCIENCE INC | -
dc.subject | COGNITIVE MAPS | -
dc.subject | REPRESENTATION | -
dc.title | Vision-based global localization for mobile robots with hybrid maps of objects and spatial layouts | -
dc.type | Article | -
dc.identifier.doi | 10.1016/j.ins.2009.06.030 | -
dc.description.journalClass | 1 | -
dc.identifier.bibliographicCitation | INFORMATION SCIENCES, v.179, no.24, pp.4174 - 4198 | -
dc.citation.title | INFORMATION SCIENCES | -
dc.citation.volume | 179 | -
dc.citation.number | 24 | -
dc.citation.startPage | 4174 | -
dc.citation.endPage | 4198 | -
dc.description.journalRegisteredClass | scie | -
dc.description.journalRegisteredClass | scopus | -
dc.identifier.wosid | 000271562000006 | -
dc.identifier.scopusid | 2-s2.0-70349739574 | -
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | -
dc.relation.journalResearchArea | Computer Science | -
dc.type.docType | Article | -
dc.subject.keywordPlus | COGNITIVE MAPS | -
dc.subject.keywordPlus | REPRESENTATION | -
dc.subject.keywordAuthor | Hybrid map | -
dc.subject.keywordAuthor | Global localization | -
dc.subject.keywordAuthor | Mobile robot | -
dc.subject.keywordAuthor | Stereo vision | -
dc.subject.keywordAuthor | Object recognition | -
Appears in Collections:
KIST Article > 2009
Files in This Item:
There are no files associated with this item.
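
Note: the abstract above mentions estimating the coarse robot pose by object recognition followed by SVD-based point cloud fitting. The sketch below illustrates that general kind of rigid alignment (Kabsch/Umeyama style) between matched 3D object positions, assuming known point correspondences; the function name, variable names, and toy data are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch: SVD-based rigid alignment of two corresponding 3D point sets.
# Illustrates the generic technique behind "SVD-based point cloud fitting";
# not the authors' implementation.
import numpy as np

def fit_rigid_transform(src, dst):
    """Estimate rotation R and translation t such that R @ src_i + t ~= dst_i.

    src, dst: (N, 3) arrays of corresponding 3D points, e.g. object positions
    matched between the stored map and the current observation (hypothetical).
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)

    # Center both point sets on their centroids.
    src_centroid = src.mean(axis=0)
    dst_centroid = dst.mean(axis=0)
    src_c = src - src_centroid
    dst_c = dst - dst_centroid

    # Cross-covariance matrix and its SVD.
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)

    # Rotation, with a reflection correction so that det(R) = +1.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst_centroid - R @ src_centroid
    return R, t

if __name__ == "__main__":
    # Toy check: recover a known rotation about the vertical axis plus a shift.
    rng = np.random.default_rng(0)
    pts = rng.uniform(-1.0, 1.0, size=(10, 3))
    theta = np.deg2rad(30.0)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
    t_true = np.array([0.5, -0.2, 0.0])
    moved = pts @ R_true.T + t_true

    R_est, t_est = fit_rigid_transform(pts, moved)
    print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

The determinant-based correction keeps the estimate a proper rotation rather than a reflection, which matters when the matched object positions are few or nearly coplanar. In the paper's pipeline such a coarse estimate is then refined by stochastic scan matching against the horizontal depth map.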