Illustration Extraction From Video Streams
Ji-Wen Chio, Shu-Yuan Chen
JPRR Vol 7, No 1 (2012); doi:10.13176/11.151 
Abstract
Teachers usually illustrate major pedagogical concepts with graphics, images, and/or tables, and take a considerable amount of time to explain them. Although many text extraction methods are available, they are limited when the background is noisy, degraded, or multicolored, or contains graphics. This work presents an illustration extraction method for video streams to increase learning efficiency among students. The proposed method separates the foreground from video streams, overcoming the above limitations of text extraction methods by exploiting the context between image sequences in a video stream and extracting illustrations from it. The proposed method has two stages: training and extraction. During the training stage, shot boundaries are detected to resample a video stream into a non-redundant training set. A background map, together with a set of frequently used colors, is then constructed from the training set and subsequently used for illustration extraction. During the extraction stage, a foreground map is generated for each frame of the video stream according to the background map and the frequent color set. Finally, illustrations are extracted by region labeling and geometric verification. Experimental results demonstrate the feasibility of the proposed method.
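The pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual algorithm: the histogram-difference shot-boundary test, the per-pixel temporal mode as the background map, all thresholds, and the function names are assumptions introduced here for illustration. Region labeling and geometric verification (the final step) are omitted for brevity.

```python
import numpy as np
from collections import Counter

def shot_boundaries(frames, threshold=0.3):
    """Detect shot boundaries via normalized histogram difference
    between consecutive frames (a simple common heuristic; assumed,
    not taken from the paper)."""
    bounds = [0]
    for i in range(1, len(frames)):
        h1, _ = np.histogram(frames[i - 1], bins=16, range=(0, 256))
        h2, _ = np.histogram(frames[i], bins=16, range=(0, 256))
        if np.abs(h1 - h2).sum() / frames[i].size > threshold:
            bounds.append(i)
    return bounds

def background_map(frames):
    """Estimate the background map as the per-pixel temporal mode
    over the (resampled) training frames."""
    stack = np.stack(frames)  # shape (T, H, W)
    mode = lambda col: Counter(col).most_common(1)[0][0]
    return np.apply_along_axis(mode, 0, stack)

def frequent_colors(frames, k=2):
    """Collect the k most frequently used pixel values in the training set."""
    values = np.concatenate([f.ravel() for f in frames])
    return {v for v, _ in Counter(values).most_common(k)}

def foreground_map(frame, bg, freq):
    """Mark pixels that differ from the background map and do not
    belong to the frequent color set."""
    mask = frame != bg
    for v in freq:
        mask &= frame != v
    return mask

# Demo on synthetic single-channel 8x8 frames.
bg_frames = [np.full((8, 8), 10, dtype=np.uint8) for _ in range(5)]
bg = background_map(bg_frames)
freq = frequent_colors(bg_frames, k=1)

frame = bg_frames[0].copy()
frame[1:6, 1:6] = 200           # synthetic "illustration" region
fg = foreground_map(frame, bg, freq)
bounds = shot_boundaries([bg_frames[0], frame])
```

In this toy run the foreground map marks exactly the 25 pixels of the inserted region, and the large histogram change between the two demo frames is flagged as a shot boundary. In practice, the extracted foreground regions would then pass through connected-component labeling and geometric checks (e.g., size and aspect-ratio constraints) to keep only illustration-like regions.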