Context not Content: A Novel Approach to Real-Time User-Generated Video Composition
Key: SWT+16-1
Author: Denny Stohr, Stefan Wilk, Iva Toteva, Wolfgang Effelsberg, Ralf Steinmetz
Date: December 2016
Kind: In proceedings
Abstract: Instant sharing of user-generated video recordings has become a widely used service on platforms such as YouNow. Yet it still poses technical challenges, as mobile upload speed and capacity are limited. One proposed solution to these issues is video composition, which allows switching between multiple video streams, selecting the best source at any given time, to compose a live video of higher overall quality for viewers. Previous approaches require visual analysis of the video streams, which usually limits the scalability of the system. In contrast, our work realizes stream selection solely on context information, based on video- and service-quality aspects derived from sensor and network measurements. The implemented monitoring service for context-aware upload of video streams is evaluated under varying network conditions and diverse user behavior, including camera shaking and user mobility. We show that both higher efficiency for video upload and higher QoE for viewers can be achieved.
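
The abstract describes selecting among concurrent streams purely from context signals (sensor and network measurements) rather than visual analysis. As a rough illustration only, and not the system described in the paper, the following Python sketch scores hypothetical streams from assumed metrics (uplink bandwidth, accelerometer-based shake level, battery); all names, weights, and thresholds are illustrative assumptions.

```python
"""Minimal sketch of context-based stream selection.

Hypothetical illustration: each uploading device periodically reports context
metrics, and the composition service picks the stream with the best score.
Weights and metric ranges are assumptions, not values from the paper.
"""

from dataclasses import dataclass


@dataclass
class ContextReport:
    stream_id: str
    uplink_kbps: float   # measured upload bandwidth of the device
    shake_level: float   # 0.0 (stable) .. 1.0 (heavy shaking), from accelerometer
    battery_pct: float   # remaining battery, used as a minor tie-breaker


def score(report: ContextReport,
          w_bandwidth: float = 0.6,
          w_stability: float = 0.3,
          w_battery: float = 0.1,
          max_kbps: float = 4000.0) -> float:
    """Combine context metrics into a single quality score in [0, 1]."""
    bandwidth = min(report.uplink_kbps, max_kbps) / max_kbps
    stability = 1.0 - report.shake_level
    battery = report.battery_pct / 100.0
    return w_bandwidth * bandwidth + w_stability * stability + w_battery * battery


def select_stream(reports: list[ContextReport]) -> str:
    """Return the id of the stream with the highest context score."""
    return max(reports, key=score).stream_id


if __name__ == "__main__":
    reports = [
        ContextReport("phone_a", uplink_kbps=3500, shake_level=0.1, battery_pct=80),
        ContextReport("phone_b", uplink_kbps=1200, shake_level=0.4, battery_pct=95),
    ]
    print(select_stream(reports))  # -> "phone_a"
```

Because no video frames are inspected, such a selector scales with the number of streams at negligible computational cost, which is the motivation the abstract gives for a context-only approach.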

The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.