Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Park, Sanghun | - |
dc.contributor.author | Shim, Jaegyu | - |
dc.contributor.author | Yoon, Nakyung | - |
dc.contributor.author | Lee, Sungman | - |
dc.contributor.author | Kwak, Donggeun | - |
dc.contributor.author | Lee, Seungyong | - |
dc.contributor.author | Kim, Young Mo | - |
dc.contributor.author | Son, Moon | - |
dc.contributor.author | Cho, Kyung Hwa | - |
dc.date.accessioned | 2024-01-19T10:33:29Z | - |
dc.date.available | 2024-01-19T10:33:29Z | - |
dc.date.created | 2022-10-20 | - |
dc.date.issued | 2022-12 | - |
dc.identifier.issn | 0045-6535 | - |
dc.identifier.uri | https://pubs.kist.re.kr/handle/201004/114257 | - |
dc.description.abstract | Improving engineering efficiency and reducing operating costs are enduring challenges for engineers worldwide. To effectively improve the performance of filtration systems, it is necessary to determine optimal operating conditions beyond conventional periodic and empirical operation. This paper proposes an effective approach to finding an optimal operating strategy using deep reinforcement learning (DRL), specifically for an ultrafiltration (UF) system. A deep learning model based on a long short-term memory (LSTM) network was developed to represent the UF system and served as the environment for DRL. The DRL agent was designed to control three actions: operating pressure, cleaning time, and cleaning concentration. Ultimately, the DRL agent drove the UF system to actively adjust the operating pressure and cleaning conditions over time toward better water productivity and operating efficiency. DRL showed that specific energy consumption could be reduced by approximately 20.9% by increasing the average water flux (39.5 to 43.7 L m⁻² h⁻¹) and reducing the operating pressure (0.617 to 0.540 bar). Moreover, the optimal actions selected by DRL reasonably outperformed conventional operation. Crucially, this study demonstrated that, owing to the nature of DRL, the approach is tractable for engineering systems with structurally complex relationships between operating conditions and outcomes. | - |
dc.language | English | - |
dc.publisher | Pergamon Press Ltd. | - |
dc.title | Deep reinforcement learning in an ultrafiltration system: Optimizing operating pressure and chemical cleaning conditions | - |
dc.type | Article | - |
dc.identifier.doi | 10.1016/j.chemosphere.2022.136364 | - |
dc.description.journalClass | 1 | - |
dc.identifier.bibliographicCitation | Chemosphere, v.308 | - |
dc.citation.title | Chemosphere | - |
dc.citation.volume | 308 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.identifier.wosid | 000864635900003 | - |
dc.relation.journalWebOfScienceCategory | Environmental Sciences | - |
dc.relation.journalResearchArea | Environmental Sciences & Ecology | - |
dc.type.docType | Article | - |
dc.subject.keywordPlus | WATER | - |
dc.subject.keywordAuthor | Deep reinforcement learning | - |
dc.subject.keywordAuthor | Machine learning | - |
dc.subject.keywordAuthor | Ultrafiltration | - |
dc.subject.keywordAuthor | Chemical cleaning | - |
dc.subject.keywordAuthor | Optimization | - |
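As a rough illustration of the control problem described in the abstract (not the authors' method), the sketch below replaces the paper's LSTM surrogate with a hypothetical toy fouling model and the DRL agent with a grid search over constant policies for the same three actions (operating pressure, cleaning time, cleaning concentration). All names, dynamics, and constants here are invented for illustration only.

```python
# Toy stand-in for the paper's LSTM-based surrogate of the UF system.
# Flux rises with pressure and falls with accumulated fouling; chemical
# cleaning (time x concentration) removes a fraction of the fouling.
# Every coefficient below is a made-up illustration, not from the paper.
class ToyUFEnv:
    def __init__(self):
        self.fouling = 0.0  # 0 = clean membrane, 1 = fully fouled

    def reset(self):
        self.fouling = 0.0
        return self.fouling

    def step(self, pressure, clean_time, clean_conc):
        flux = 70.0 * pressure * (1.0 - self.fouling)        # toy flux, L m^-2 h^-1
        self.fouling = min(1.0, self.fouling + 0.05 * pressure)
        self.fouling *= max(0.0, 1.0 - clean_time * clean_conc)
        energy = 0.5 * pressure + 0.2 * clean_time * clean_conc  # toy energy cost
        reward = flux - 10.0 * energy  # productivity minus weighted energy use
        return self.fouling, reward

# Discretized action grid; a DRL agent would instead learn a
# time-varying policy over a continuous version of this space.
PRESSURES = [0.45, 0.54, 0.62]   # bar (toy values near the paper's range)
CLEAN_TIMES = [0.0, 0.5, 1.0]    # normalized cleaning time
CLEAN_CONCS = [0.0, 0.5, 1.0]    # normalized cleaning concentration

def evaluate(env, policy, steps=50):
    """Return the cumulative reward of holding one action fixed."""
    env.reset()
    total = 0.0
    for _ in range(steps):
        _, r = env.step(*policy)
        total += r
    return total

def search_best_policy():
    """Exhaustively score every constant policy on the action grid."""
    env = ToyUFEnv()
    best, best_ret = None, float("-inf")
    for p in PRESSURES:
        for t in CLEAN_TIMES:
            for c in CLEAN_CONCS:
                ret = evaluate(env, (p, t, c))
                if ret > best_ret:
                    best, best_ret = (p, t, c), ret
    return best, best_ret
```

Even this crude search exposes the trade-off the paper targets: higher pressure buys flux at the cost of energy and faster fouling, while cleaning spends energy to restore flux, which is why a learned, state-dependent policy can beat fixed periodic operation.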
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.