Full metadata record

DC Field: Value
dc.contributor.author: Lee, TS
dc.contributor.author: Park, M
dc.contributor.author: Kim, TS
dc.date.accessioned: 2024-01-21T04:36:22Z
dc.date.available: 2024-01-21T04:36:22Z
dc.date.created: 2021-09-05
dc.date.issued: 2005-08
dc.identifier.issn: 0302-9743
dc.identifier.uri: https://pubs.kist.re.kr/handle/201004/136250
dc.description.abstract: Autonomic machines interacting with humans should have the capability to perceive states of emotion and attitude through implicit messages in order to obtain voluntary cooperation from their clients. Voice is the easiest and most natural way to exchange human messages. Automatic systems capable of understanding states of emotion and attitude have utilized features based on the pitch and energy of uttered sentences. The performance of existing emotion recognition systems can be further improved with the support of the linguistic knowledge that a specific tonal section of a sentence is related to the states of emotion and attitude. In this paper, we attempt to improve the emotion recognition rate by adopting such linguistic knowledge of Korean ending boundary tones in an automatic system implemented using pitch-related features and multilayer perceptrons. The results of an experiment on a Korean emotional speech database confirm a substantial improvement.
dc.language: English
dc.publisher: SPRINGER-VERLAG BERLIN
dc.title: Toward more reliable emotion recognition of vocal sentences by emphasizing information of Korean ending boundary tones
dc.type: Article
dc.description.journalClass: 1
dc.identifier.bibliographicCitation: ROUGH SETS, FUZZY SETS, DATA MINING, AND GRANULAR COMPUTING, PT 2, PROCEEDINGS, v.3642, pp.304 - 313
dc.citation.title: ROUGH SETS, FUZZY SETS, DATA MINING, AND GRANULAR COMPUTING, PT 2, PROCEEDINGS
dc.citation.volume: 3642
dc.citation.startPage: 304
dc.citation.endPage: 313
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.identifier.wosid: 000232190100032
dc.identifier.scopusid: 2-s2.0-33645992467
dc.relation.journalWebOfScienceCategory: Computer Science, Artificial Intelligence
dc.relation.journalResearchArea: Computer Science
dc.type.docType: Article; Proceedings Paper
dc.subject.keywordAuthor: Emotion recognition
dc.subject.keywordAuthor: Voice recognition
dc.subject.keywordAuthor: Vocal sentence
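The abstract describes a pipeline of pitch-related features fed into multilayer perceptrons, with extra weight on the sentence-final (ending boundary tone) region. As a rough illustration only, the sketch below computes simple pitch statistics from an F0 contour, including a tail slope standing in for the boundary tone, and runs a tiny one-hidden-layer perceptron forward pass. All names, the example weights, and the sample contour are invented for illustration; they are not the paper's actual features, architecture, or data.

```python
import math

def pitch_features(f0, tail=5):
    """Summarize an F0 contour (Hz): mean, range, and the slope over the
    final frames, a crude stand-in for the ending boundary tone."""
    mean = sum(f0) / len(f0)
    rng = max(f0) - min(f0)
    seg = f0[-tail:]
    slope = (seg[-1] - seg[0]) / (len(seg) - 1)  # Hz per frame
    return [mean, rng, slope]

def mlp_forward(x, W1, b1, W2, b2):
    """One hidden layer with tanh activation, softmax output."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    z = [sum(w * hi for w, hi in zip(row, h)) + b
         for row, b in zip(W2, b2)]
    m = max(z)                       # stabilize the exponentials
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Illustrative (untrained) weights: 3 features -> 2 hidden -> 2 classes.
W1 = [[0.01, 0.02, 0.3], [-0.01, 0.01, -0.2]]
b1 = [0.0, 0.0]
W2 = [[1.0, -1.0], [-1.0, 1.0]]
b2 = [0.0, 0.0]

# A made-up contour with a rising ending boundary tone.
contour = [180, 182, 185, 190, 200, 215, 230]
probs = mlp_forward(pitch_features(contour), W1, b1, W2, b2)
```

The emphasis on the ending boundary tone appears here only as the tail-slope feature; the paper's actual feature set and network configuration are given in the full text.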
Appears in Collections:
KIST Article > 2005
Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
