A multi-temporal framework for high-level activity analysis: Violent event detection in visual surveillance
- Authors
 - Song, Donghui; Kim, Chansu; Park, Sung-Kee
 
- Issue Date
 - 2018-06
 
- Publisher
 - ELSEVIER SCIENCE INC
 
- Citation
 - INFORMATION SCIENCES, v.447, pp.83 - 103
 
- Abstract
 - This paper presents a novel framework for high-level activity analysis based on late fusion over multiple independent temporal perception layers. The method allows us to handle the temporal diversity of high-level activities. The framework consists of multi-temporal analysis, multi-temporal perception layers, and late fusion. We build two types of perception layers based on situation graph trees (SGTs) and support vector machines (SVMs). The results obtained from the multi-temporal perception layers are fused into an activity score in a late-fusion step. To verify this approach, we apply the framework to violent event detection in visual surveillance and conduct experiments on three datasets: BEHAVE, NUS-HGA, and videos from YouTube showing real situations. We also compare the proposed framework with existing single-temporal frameworks. The experiments produced accuracies of 0.783 (SGT-based, BEHAVE), 0.702 (SVM-based, BEHAVE), 0.872 (SGT-based, NUS-HGA), and 0.699 (SGT-based, YouTube), showing that our multi-temporal approach has advantages over single-temporal methods. (C) 2018 Elsevier Inc. All rights reserved.
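The late-fusion step described in the abstract can be sketched as follows. Each temporal perception layer (SGT- or SVM-based, one per window length) produces a score, and the scores are combined into a single activity score that is thresholded for detection. The weighted-average fusion rule, function names, and threshold below are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: fuse per-layer scores from multiple temporal
# perception layers into one activity score (the weighted average used
# here is an assumed fusion rule, not necessarily the paper's).

def late_fusion(layer_scores, weights=None):
    """Fuse per-layer activity scores into a single score."""
    if weights is None:
        weights = [1.0] * len(layer_scores)  # equal weighting by default
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, layer_scores)) / total

def detect_violent_event(layer_scores, threshold=0.5):
    """Flag a violent event when the fused score reaches the threshold."""
    return late_fusion(layer_scores) >= threshold

# Example: scores from three temporal layers (short / medium / long windows)
fused = late_fusion([0.9, 0.6, 0.4])
```

Per-layer weights could be tuned on a validation set so that the window lengths most informative for a given activity dominate the fused score.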
 
- Keywords
 - Computer vision; Multi-temporal framework; High-level activity analysis; Violent event detection; Late fusion; Visual surveillance; Recognition; Behavior
 
- ISSN
 - 0020-0255
 
- URI
 - https://pubs.kist.re.kr/handle/201004/121323
 
- DOI
 - 10.1016/j.ins.2018.02.065
 
- Appears in Collections:
 - KIST Article > 2018
 
 