Appiah, Kofi, Hunter, Andrew, Meng, Fanyi, Dickinson, Patrick et al. (2010) Accelerated hardware video object segmentation: From foreground detection to connected components labelling. Computer Vision and Image Understanding, 114 (11), pp. 1282-1291. ISSN 1077-3142
Full content URL: http://dx.doi.org/10.1016/j.cviu.2010.03.021
Full text: Appiah2010CVIUAcceleratedHardwareObjectExtraction.pdf (PDF, whole document, 593kB)
Item Type: Article
Item Status: Live Archive
Abstract
This paper demonstrates the use of a single-chip FPGA for the segmentation of moving objects in a video sequence. The system maintains highly accurate background models and integrates the detection of foreground pixels with the labelling of objects using a connected components algorithm. The background models are based on 24-bit RGB values and 8-bit grayscale intensity values. A multimodal background differencing algorithm is presented, using a single FPGA chip and four blocks of RAM. The real-time connected component labelling algorithm, also designed for FPGA implementation, run-length encodes the output of the background subtraction and performs connected component analysis on this representation. The run-length encoding, together with other parts of the algorithm, is performed in parallel; sequential operations are minimized, since the number of run-lengths is typically far smaller than the number of pixels. The two algorithms are pipelined together for maximum efficiency.
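The abstract's two software-visible steps, run-length encoding a binary foreground mask and then labelling connected components over the runs rather than over individual pixels, can be illustrated with a small reference model. The C sketch below is not the authors' FPGA implementation (which is parallel and pipelined in hardware); the mask data, the `Run` structure, and the use of union-find to merge overlapping runs on adjacent rows are all illustrative assumptions.

```c
#include <stdio.h>

#define W 16            /* frame width (toy example)                 */
#define H 4             /* frame height                              */
#define MAX_RUNS 256    /* upper bound on runs in this tiny frame    */

/* One run of consecutive foreground pixels on a single row. */
typedef struct { int row, start, end, label; } Run;

static int parent[MAX_RUNS];

/* Union-find helpers used to merge labels of touching runs. */
static int find(int x) { while (parent[x] != x) x = parent[x] = parent[parent[x]]; return x; }
static void unite(int a, int b) { parent[find(a)] = find(b); }

int main(void) {
    /* Hypothetical binary foreground mask, standing in for the output
       of background differencing (1 = foreground, 0 = background). */
    static const int mask[H][W] = {
        {0,1,1,0,0,0,0,1,1,1,0,0,0,0,0,0},
        {0,1,1,1,0,0,0,0,1,1,0,0,0,1,1,0},
        {0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0},
        {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0},
    };

    Run runs[MAX_RUNS];
    int nruns = 0;

    /* Pass 1: run-length encode each row of the mask. */
    for (int r = 0; r < H; r++) {
        int c = 0;
        while (c < W) {
            if (mask[r][c]) {
                int s = c;
                while (c < W && mask[r][c]) c++;
                runs[nruns] = (Run){ r, s, c - 1, nruns };
                parent[nruns] = nruns;
                nruns++;
            } else c++;
        }
    }

    /* Pass 2: connected component analysis on runs, not pixels.
       Two runs belong to the same object if they lie on adjacent
       rows and their column intervals overlap (4-connectivity). */
    for (int i = 0; i < nruns; i++)
        for (int j = i + 1; j < nruns; j++)
            if (runs[j].row == runs[i].row + 1 &&
                runs[j].start <= runs[i].end &&
                runs[j].end   >= runs[i].start)
                unite(runs[i].label, runs[j].label);

    /* Report each run with its resolved component label. */
    for (int i = 0; i < nruns; i++)
        printf("row %d cols %2d-%2d -> component %d\n",
               runs[i].row, runs[i].start, runs[i].end, find(runs[i].label));
    return 0;
}
```

The point of the run-based representation is visible in the second pass: the quadratic merge loop runs over a handful of runs instead of every pixel in the frame, which is the property the abstract relies on when it notes that sequential operations are minimized.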