Full metadata record

DC Field	Value	Language
dc.contributor.author	Aum, Sungmin	-
dc.contributor.author	Choe, Seon	-
dc.date.accessioned	2024-01-19T13:32:59Z	-
dc.date.available	2024-01-19T13:32:59Z	-
dc.date.created	2022-04-05	-
dc.date.issued	2021-10	-
dc.identifier.issn	2046-4053	-
dc.identifier.uri	https://pubs.kist.re.kr/handle/201004/116274	-
dc.description.abstract	Background: Systematic reviews (SRs) are recognized as reliable evidence, which enables evidence-based medicine to be applied to clinical practice. However, owing to the significant effort an SR requires, its creation is time-consuming, which often leads to out-of-date results. Tools for automating SR tasks have therefore been considered; however, applying a general natural language processing model to domain-specific articles, together with the insufficiency of text data for training, poses challenges.
	Methods: The research objective is to automate the classification of included articles using the Bidirectional Encoder Representations from Transformers (BERT) algorithm. In particular, srBERT models based on the BERT algorithm are pre-trained using abstracts of articles from two types of datasets, and the resulting models are then fine-tuned using the article titles. The performance of the proposed models is compared with that of existing general machine-learning models.
	Results: The proposed srBERT(my) model, pre-trained with abstracts of articles and a generated vocabulary, achieved state-of-the-art performance in both the classification and relation-extraction tasks. For the first task, it achieved an accuracy of 94.35% (89.38%), an F1 score of 66.12 (78.64), and an area under the receiver operating characteristic curve of 0.77 (0.9) on the original and (generated) datasets, respectively. In the second task, the model achieved an accuracy of 93.5% with a loss of 27%, thereby outperforming the other evaluated models, including the original BERT model.
	Conclusions: This research shows the feasibility of automatic article classification using machine-learning approaches to support SR tasks, as well as its broad applicability. However, because the model's performance depends on the size and class ratio of the training dataset, it is important to secure a dataset of sufficient quality, which may pose challenges.	-
dc.language	English	-
dc.publisher	BioMed Central	-
dc.title	srBERT: automatic article classification model for systematic review using BERT	-
dc.type	Article	-
dc.identifier.doi	10.1186/s13643-021-01763-w	-
dc.description.journalClass	1	-
dc.identifier.bibliographicCitation	Systematic Reviews, v.10, no.1	-
dc.citation.title	Systematic Reviews	-
dc.citation.volume	10	-
dc.citation.number	1	-
dc.description.isOpenAccess	Y	-
dc.description.journalRegisteredClass	scie	-
dc.description.journalRegisteredClass	scopus	-
dc.identifier.wosid	000712962200003	-
dc.identifier.scopusid	2-s2.0-85118421200	-
dc.relation.journalWebOfScienceCategory	Medicine, General & Internal	-
dc.relation.journalResearchArea	General & Internal Medicine	-
dc.type.docType	Review	-
dc.subject.keywordAuthor	Systematic review	-
dc.subject.keywordAuthor	Process automation	-
dc.subject.keywordAuthor	Deep learning	-
dc.subject.keywordAuthor	Text mining	-
Appears in Collections:
KIST Article > 2021
Files in This Item:
There are no files associated with this item.
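
The dc.description.abstract field above describes fine-tuning a pre-trained BERT-style encoder on article titles to decide which studies to include in a systematic review. Below is a minimal, hypothetical Python sketch of such a title-level fine-tuning step using the Hugging Face Transformers and PyTorch APIs; it is not the authors' released srBERT code, and the checkpoint name ("bert-base-uncased"), the TitleDataset helper, and the toy labels are illustrative assumptions.

# Minimal sketch (assumption: not the authors' srBERT code): fine-tune a
# BERT-style checkpoint to label article titles as include (1) / exclude (0)
# for systematic-review screening.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

class TitleDataset(Dataset):
    """Hypothetical dataset of (article title, include/exclude label) pairs."""
    def __init__(self, titles, labels, tokenizer, max_len=64):
        # Tokenize all titles up front into fixed-length tensors.
        self.enc = tokenizer(titles, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

def fine_tune(titles, labels, model_name="bert-base-uncased",
              epochs=3, lr=2e-5, batch_size=16):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=2)  # binary include/exclude classification head
    loader = DataLoader(TitleDataset(titles, labels, tokenizer),
                        batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in loader:
            optimizer.zero_grad()
            out = model(**batch)   # loss is returned because "labels" is passed
            out.loss.backward()
            optimizer.step()
    return model

# Toy usage (labels are made up for illustration):
# model = fine_tune(["Randomized trial of drug X", "Unrelated editorial"], [1, 0])

In the paper's pipeline, the encoder is first pre-trained on article abstracts (optionally with a generated vocabulary) before this title-level fine-tuning; in the sketch above, such a domain-pre-trained checkpoint would simply replace the generic model_name.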