Full metadata record

DC Field | Value | Language
dc.contributor.author | Cho, Sungjae | -
dc.contributor.author | Lee, Soo-Young | -
dc.date.accessioned | 2024-01-12T04:08:33Z | -
dc.date.available | 2024-01-12T04:08:33Z | -
dc.date.created | 2022-10-04 | -
dc.date.issued | 2021 | -
dc.identifier.issn | 2308-457X | -
dc.identifier.uri | https://pubs.kist.re.kr/handle/201004/77777 | -
dc.description.abstract | We present a methodology to train our multi-speaker emotional text-to-speech synthesizer, which can express speech in 7 different emotions for each of 10 speakers. All silences are removed from the audio samples prior to training, which allows our model to learn quickly. Curriculum learning is applied to train our model efficiently: it is first trained on a large single-speaker neutral dataset, then on neutral speech from all speakers, and finally on emotional speech from all speakers. In each stage, training samples of each speaker-emotion pair appear in mini-batches with equal probability. Through this procedure, our model can synthesize speech for all targeted speakers and emotions. Our synthesized audio sets are available on our web page. | -
dc.language | English | -
dc.publisher | ISCA-INT SPEECH COMMUNICATION ASSOC | -
dc.title | Multi-speaker Emotional Text-to-speech Synthesizer | -
dc.type | Conference | -
dc.identifier.doi | 10.48550/arXiv.2112.03557 | -
dc.description.journalClass | 1 | -
dc.identifier.bibliographicCitation | Interspeech Conference, pp.2337 - 2338 | -
dc.citation.title | Interspeech Conference | -
dc.citation.startPage | 2337 | -
dc.citation.endPage | 2338 | -
dc.citation.conferencePlace | FR | -
dc.citation.conferencePlace | Brno, CZECH REPUBLIC | -
dc.citation.conferenceDate | 2021-08-30 | -
dc.relation.isPartOf | INTERSPEECH 2021 | -
dc.identifier.wosid | 000841879502086 | -
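
The abstract above mentions that all silences are removed from the audio samples before training. A minimal sketch of such a preprocessing step is shown below, assuming librosa and soundfile are used; the top_db threshold and file paths are illustrative placeholders, not values from the paper.

```python
import numpy as np
import librosa
import soundfile as sf

def remove_silence(in_path: str, out_path: str, top_db: float = 40.0) -> None:
    """Drop every region quieter than top_db below the peak and save the remainder."""
    wav, sr = librosa.load(in_path, sr=None)                # keep the file's original sample rate
    intervals = librosa.effects.split(wav, top_db=top_db)   # (start, end) sample indices of non-silent audio
    if len(intervals) == 0:                                 # nothing above the threshold: keep the original
        voiced = wav
    else:
        voiced = np.concatenate([wav[s:e] for s, e in intervals])
    sf.write(out_path, voiced, sr)
```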
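The abstract also states that, within each curriculum stage (single-speaker neutral, multi-speaker neutral, multi-speaker emotional), every speaker-emotion pair appears in mini-batches with equal probability. A rough sketch of such a balanced sampler follows; the function and field names are hypothetical and not taken from the authors' code.

```python
import random
from collections import defaultdict

def make_balanced_batch(samples, batch_size):
    """samples: list of dicts carrying at least 'speaker' and 'emotion' keys."""
    by_pair = defaultdict(list)
    for s in samples:
        by_pair[(s["speaker"], s["emotion"])].append(s)
    pairs = list(by_pair)
    batch = []
    for _ in range(batch_size):
        pair = random.choice(pairs)                  # every speaker-emotion pair is equally likely
        batch.append(random.choice(by_pair[pair]))   # then draw one of its utterances uniformly
    return batch
```

Sampling the pair first and the utterance second keeps rare speaker-emotion combinations from being drowned out by speakers with much more data.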
Appears in Collections:
KIST Conference Paper > 2021
Files in This Item:
There are no files associated with this item.