Browsing by Author "Cai, Hongyuan"
Now showing 1 - 3 of 3
Item: Digesting omni-video along routes for navigation (Office of the Vice Chancellor for Research, 2011-04-08)
Authors: Cai, Hongyuan; Zheng, Jiang Yu
Omni-directional video records complete visual information along a route. Although replaying an omni-video presents reality, it requires a significant amount of memory and communication bandwidth. This work extracts distinct views from an omni-video to form a visual digest, named a route sheet, for navigation. We sort scenes at the motion and visibility level and investigate the similarity and redundancy of scenes in the context of a route. We use source data from 3D elevation maps or omni-videos for the view selection. By condensing the flow in the video, our algorithm can generate distinct omni-view sequences with visual information as rich as the omni-video for further scene indexing and navigation with GIS data.

Item: Key Views for Visualizing Large Spaces (Elsevier, 2009-08)
Authors: Cai, Hongyuan; Zheng, Jiang Yu; Fang, Shiaofen; Tuceryan, Mihran
Image is a dominant medium among video, 3D models, and other media for visualizing environments and creating virtual access on the Internet. The location of image capture, however, is subjective and has relied on the esthetic sense of photographers up until this point. In this paper, we not only visualize areas with images but also propose a general framework to determine where the most distinct viewpoints should be located. Starting from elevation data, we present spatial and content information in ground-based images such that (1) a given number of images can have maximum coverage of informative scenes; (2) a set of key views can be selected with certain continuity for representing the most distinct views. According to scene visibility, continuity, and data redundancy, we evaluate viewpoints numerically with an object-emitting illumination model. Our key view exploration may eventually reduce the visual data to transmit, facilitate image acquisition, indexing, and interaction, and enhance perception of spaces. Real sample images are captured at the planned positions to form a visual network that indexes the area.
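The key-view framework above scores candidate viewpoints by how much informative scene area they cover. A minimal illustrative sketch of that coverage idea is a greedy selection over a precomputed visibility matrix: repeatedly pick the viewpoint that adds the most uncovered, weighted scene cells until the view budget is spent. The visibility matrix, weights, and function name below are simplifying assumptions for exposition, not the paper's object-emitting illumination model.

```python
import numpy as np

def select_key_views(visibility, weights, num_views):
    """Greedy coverage-based selection of key viewpoints (illustrative sketch).

    visibility : (V, C) bool array; visibility[v, c] is True if candidate
                 viewpoint v sees scene cell c (assumed precomputed, e.g.
                 from elevation data).
    weights    : (C,) array of per-cell importance (informativeness).
    num_views  : maximum number of key views to keep.
    Returns the indices of the chosen viewpoints.
    """
    covered = np.zeros(visibility.shape[1], dtype=bool)
    chosen = []
    for _ in range(num_views):
        # Marginal gain: newly covered weighted scene area for each candidate.
        gain = (visibility & ~covered).dot(weights)
        gain[chosen] = -1.0                  # do not reselect a viewpoint
        best = int(np.argmax(gain))
        if gain[best] <= 0:                  # nothing new is visible; stop early
            break
        chosen.append(best)
        covered |= visibility[best]
    return chosen

# Toy example: 4 candidate viewpoints, 6 scene cells of equal weight.
vis = np.array([[1, 1, 0, 0, 0, 0],
                [0, 1, 1, 1, 0, 0],
                [0, 0, 0, 1, 1, 1],
                [1, 0, 0, 0, 0, 1]], dtype=bool)
print(select_key_views(vis, np.ones(6), num_views=2))  # -> [1, 2]
```

The greedy rule is only one plausible way to trade coverage against the number of views; the paper additionally accounts for view continuity and data redundancy when ranking viewpoints.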
Item: Video anatomy: spatial-temporal video profile (2014-07-31)
Authors: Cai, Hongyuan; Zheng, Jiang Yu; Tuceryan, Mihran; Popescu, Voicu Sebastian; Tricoche, Xavier; Prabhakar, Sunil; Gorman, William J.
A massive number of videos are uploaded to video websites, and smooth video browsing, editing, retrieval, and summarization are in demand. Most videos employ several types of camera operations for expanding the field of view, emphasizing events, and expressing cinematic effects. To digest heterogeneous videos in video websites and databases, video clips are profiled into a 2D image scroll containing both spatial and temporal information for video preview. The video profile is visually continuous, compact, scalable, and indexed to each frame. This work analyzes camera kinematics, including zoom, translation, and rotation, and categorizes camera actions as their combinations. An automatic video summarization framework is proposed and developed. After conventional video clip segmentation and video segmentation for smooth camera operations, the global flow field under all camera actions is investigated for profiling various types of video. A new algorithm is designed to extract the major flow direction and convergence factor using condensed images. This work then proposes a uniform scheme to segment video clips and sections, sample the video volume across the major flow, and compute the flow convergence factor, in order to obtain an intrinsic scene space less influenced by camera ego-motion. A motion blur technique is also used to render dynamic targets in the profile. The resulting video profile can be displayed in a video track to guide access to video frames, help video editing, and facilitate applications such as surveillance, visual archiving of environments, video retrieval, and online video preview.
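The video-anatomy profile above rests on estimating the dominant image motion and a convergence factor from the global flow field induced by camera actions. Below is a small illustrative sketch assuming dense optical flow from OpenCV's Farneback estimator; the center-relative convergence measure and all names are simplifying assumptions for exposition, not the dissertation's condensed-image algorithm.

```python
import cv2
import numpy as np

def flow_statistics(prev_gray, next_gray):
    """Estimate a major flow direction and a convergence factor between two
    grayscale frames (illustrative sketch, not the original method).

    Returns (direction, convergence):
      direction   : unit 2D vector of the dominant image motion, suggesting
                    camera translation or panning.
      convergence : mean component of the flow toward the image center;
                    positive values indicate converging flow (e.g. zoom-out),
                    negative values indicate diverging flow (e.g. zoom-in).
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)  # (H, W, 2)
    h, w = prev_gray.shape

    # Dominant (mean) flow vector, normalized to a unit direction.
    mean_flow = flow.reshape(-1, 2).mean(axis=0)
    norm = np.linalg.norm(mean_flow)
    direction = mean_flow / norm if norm > 1e-6 else np.zeros(2)

    # Unit vectors pointing from every pixel toward the image center.
    ys, xs = np.mgrid[0:h, 0:w]
    to_center = np.dstack((w / 2.0 - xs, h / 2.0 - ys)).astype(np.float32)
    dist = np.linalg.norm(to_center, axis=2, keepdims=True) + 1e-6
    to_center /= dist

    # Project the flow onto those radial directions and average.
    convergence = float((flow * to_center).sum(axis=2).mean())
    return direction, convergence
```

In this sketch, per-frame-pair statistics like these could be accumulated along a clip to classify camera actions (zoom, translation, rotation, or their combinations) before sampling the video volume across the major flow, in the spirit of the profiling scheme described above.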