Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Aum, Sungmin | - |
dc.contributor.author | Choe, Seon | - |
dc.date.accessioned | 2024-01-19T13:32:59Z | - |
dc.date.available | 2024-01-19T13:32:59Z | - |
dc.date.created | 2022-04-05 | - |
dc.date.issued | 2021-10 | - |
dc.identifier.issn | 2046-4053 | - |
dc.identifier.uri | https://pubs.kist.re.kr/handle/201004/116274 | - |
dc.description.abstract | Background Systematic reviews (SRs) are recognized as reliable evidence, which enables evidence-based medicine to be applied to clinical practice. However, owing to the significant effort an SR requires, its creation is time-consuming and its results often become out of date. Tools for automating SR tasks have therefore been considered; however, applying general natural language processing models to domain-specific articles, together with the insufficient text data available for training, poses challenges. Methods The research objective is to automate the classification of included articles using the Bidirectional Encoder Representations from Transformers (BERT) algorithm. In particular, srBERT models based on the BERT algorithm are pre-trained using abstracts of articles from two types of datasets, and the resulting models are then fine-tuned using the article titles. The performance of the proposed models is compared with that of existing general machine-learning models. Results Our results indicate that the proposed srBERT(my) model, pre-trained with abstracts of articles and a generated vocabulary, achieved state-of-the-art performance in both the classification and relation-extraction tasks; for the first task, it achieved an accuracy of 94.35% (89.38%), an F1 score of 66.12 (78.64), and an area under the receiver operating characteristic curve of 0.77 (0.9) on the original and (generated) datasets, respectively. In the second task, the model achieved an accuracy of 93.5% with a loss of 27%, outperforming the other evaluated models, including the original BERT model. Conclusions Our research shows the possibility of automatic article classification using machine-learning approaches to support SR tasks, as well as its broad applicability. However, because the performance of our model depends on the size and class ratio of the training dataset, it is important to secure a training dataset of sufficient quality, which may pose challenges. | - |
dc.language | English | - |
dc.publisher | BioMed Central | - |
dc.title | srBERT: automatic article classification model for systematic review using BERT | - |
dc.type | Article | - |
dc.identifier.doi | 10.1186/s13643-021-01763-w | - |
dc.description.journalClass | 1 | - |
dc.identifier.bibliographicCitation | Systematic Reviews, v.10, no.1 | - |
dc.citation.title | Systematic Reviews | - |
dc.citation.volume | 10 | - |
dc.citation.number | 1 | - |
dc.description.isOpenAccess | Y | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.identifier.wosid | 000712962200003 | - |
dc.identifier.scopusid | 2-s2.0-85118421200 | - |
dc.relation.journalWebOfScienceCategory | Medicine, General & Internal | - |
dc.relation.journalResearchArea | General & Internal Medicine | - |
dc.type.docType | Review | - |
dc.subject.keywordAuthor | Systematic review | - |
dc.subject.keywordAuthor | Process automation | - |
dc.subject.keywordAuthor | Deep learning | - |
dc.subject.keywordAuthor | Text mining | - |
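The Methods portion of the abstract above describes pre-training BERT-style models (srBERT) on article abstracts with a generated vocabulary before fine-tuning them on titles. The record does not include the authors' code, so the following is only a minimal sketch of what such a pre-training step could look like with the Hugging Face `tokenizers`, `datasets`, and `transformers` libraries; the file path `abstracts.txt`, the output directories, the model size, and all hyperparameters are placeholder assumptions, not the paper's actual settings.

```python
# Minimal sketch (not the authors' code): build a domain vocabulary from
# article abstracts and pre-train a BERT masked language model on them.
# "abstracts.txt", the output directories, and the hyperparameters are
# placeholder assumptions.
import os

from datasets import load_dataset
from tokenizers import BertWordPieceTokenizer
from transformers import (BertConfig, BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# 1) Train a WordPiece vocabulary on the abstracts (one abstract per line).
os.makedirs("srbert-vocab", exist_ok=True)
wp = BertWordPieceTokenizer(lowercase=True)
wp.train(files=["abstracts.txt"], vocab_size=30522, min_frequency=2)
wp.save_model("srbert-vocab")  # writes vocab.txt
tokenizer = BertTokenizerFast.from_pretrained("srbert-vocab")

# 2) Tokenize the abstracts for masked-language-model pre-training.
abstracts = load_dataset("text", data_files={"train": "abstracts.txt"})["train"]
abstracts = abstracts.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=["text"],
)

# 3) Pre-train a BERT encoder from scratch with the generated vocabulary.
model = BertForMaskedLM(BertConfig(vocab_size=tokenizer.vocab_size))
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="srbert-pretrained",
                           per_device_train_batch_size=32,
                           num_train_epochs=3),
    train_dataset=abstracts,
    data_collator=collator,
)
trainer.train()
trainer.save_model("srbert-pretrained")
tokenizer.save_pretrained("srbert-pretrained")
```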
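The classification task described in the abstract (screening articles for inclusion based on their titles, evaluated with accuracy, F1, and AUROC) could then be approximated by fine-tuning the pre-trained encoder as a binary sequence classifier. Again, this is a hedged illustration rather than the authors' pipeline: the CSV files and the column names "title" and "included" (0/1) are assumptions introduced only for the example.

```python
# Minimal sketch (not the authors' pipeline): fine-tune the pre-trained
# encoder to classify article titles as included/excluded and report the
# metrics mentioned in the abstract (accuracy, F1, AUROC).
# "titles_train.csv"/"titles_test.csv" with columns "title" (text) and
# "included" (0/1 label) are placeholder assumptions.
import numpy as np
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from transformers import (BertForSequenceClassification, BertTokenizerFast,
                          Trainer, TrainingArguments)

tokenizer = BertTokenizerFast.from_pretrained("srbert-pretrained")
model = BertForSequenceClassification.from_pretrained("srbert-pretrained",
                                                      num_labels=2)

data = load_dataset("csv", data_files={"train": "titles_train.csv",
                                       "test": "titles_test.csv"})
data = data.map(
    lambda batch: tokenizer(batch["title"], truncation=True,
                            padding="max_length", max_length=64),
    batched=True,
)
data = data.rename_column("included", "labels")

def compute_metrics(eval_pred):
    # Accuracy, F1, and AUROC, as reported for the classification task.
    logits, labels = eval_pred
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = probs / probs.sum(axis=1, keepdims=True)
    preds = logits.argmax(axis=1)
    return {"accuracy": accuracy_score(labels, preds),
            "f1": f1_score(labels, preds),
            "auroc": roc_auc_score(labels, probs[:, 1])}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="srbert-screening",
                           per_device_train_batch_size=16,
                           num_train_epochs=4),
    train_dataset=data["train"],
    eval_dataset=data["test"],
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())
```

The relation-extraction task mentioned in the Results would require a different output head and labeled entity pairs, which this sketch does not cover.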