Illustration Extraction From Video Streams
Ji-Wen Chio, Shu-Yuan Chen
Abstract
Teachers usually illustrate major pedagogical concepts with graphics, images, and/or tables and, in doing so, spend a considerable amount of time on explanation. Although many text extraction methods are available, they perform poorly when the background is noisy, degraded, or multicolored, or contains graphics. This work presents an illustration extraction method for video streams to increase learning efficiency among students. The proposed method separates the foreground from video streams, overcoming the above limitations of text extraction methods by exploiting the temporal context between image sequences in a video stream to extract illustrations. The proposed method consists of two stages: training and extraction. During the training stage, shot boundaries are detected to resample the video stream into a non-redundant training set. A background map, together with a set of frequently used colors, is then constructed from the training set and subsequently used for illustration extraction. During the extraction stage, a foreground map is generated for each frame of the video stream according to the background map and the frequent color set. Finally, illustrations are extracted by region labeling and geometric verification. Experimental results demonstrate the feasibility of the proposed method.
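To make the two-stage pipeline concrete, the following is a minimal Python/OpenCV sketch of the steps the abstract outlines (shot-boundary detection, background map and frequent-color-set construction, foreground map generation, and region labeling with geometric verification). All function names, thresholds, and parameter values here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the pipeline described in the abstract.
# All thresholds and parameter values are illustrative assumptions,
# not the values used by the authors.
import cv2
import numpy as np

def detect_shot_boundaries(frames, threshold=0.4):
    """Mark frames whose color-histogram distance from the previous
    frame exceeds a threshold (a simple shot-boundary heuristic)."""
    boundaries = [0]
    prev_hist = None
    for i, frame in enumerate(frames):
        hist = cv2.calcHist([frame], [0, 1, 2], None,
                            [8, 8, 8], [0, 256] * 3)
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            dist = cv2.compareHist(prev_hist, hist,
                                   cv2.HISTCMP_BHATTACHARYYA)
            if dist > threshold:
                boundaries.append(i)
        prev_hist = hist
    return boundaries

def build_background_model(training_frames, n_colors=8):
    """Training stage: the per-pixel median over the resampled frames
    gives a rough background map; a coarse color histogram gives the
    set of frequently used colors."""
    stack = np.stack(training_frames).astype(np.uint8)
    background = np.median(stack, axis=0).astype(np.uint8)
    quantized = stack // 32  # 8 bins per channel
    colors, counts = np.unique(quantized.reshape(-1, 3),
                               axis=0, return_counts=True)
    frequent = colors[np.argsort(counts)[::-1][:n_colors]] * 32 + 16
    return background, frequent

def extract_illustrations(frame, background, frequent_colors,
                          diff_thresh=40, min_area=500):
    """Extraction stage: pixels far from both the background map and
    every frequent color form the foreground map; connected regions
    passing a simple geometric check are kept as candidate
    illustrations."""
    diff = np.linalg.norm(frame.astype(int) - background.astype(int),
                          axis=2)
    foreground = diff > diff_thresh
    for color in frequent_colors:
        dist = np.linalg.norm(frame.astype(int) - color.astype(int),
                              axis=2)
        foreground &= dist > diff_thresh
    mask = foreground.astype(np.uint8) * 255
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, n):  # label 0 is the background component
        x, y, w, h, area = stats[i]
        if area >= min_area and 0.1 < w / h < 10:  # geometric check
            boxes.append((x, y, w, h))
    return boxes
```

The sketch substitutes common generic techniques (Bhattacharyya histogram distance for shot detection, a per-pixel median for the background map, connected-component statistics for region labeling); the paper's actual algorithms for each step may differ.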