ACM SIGCOMM 2017 Workshop on Virtual Reality and Augmented Reality Network (VR/AR Network 2017)
Workshop Program
- Opening Remarks
-
Keynote Talk: AR/VR and the Future of Networked Intelligence
Dr. Tish Shute, Director, AR/VR, Corporate Strategy and Technology Planning, Huawei, USA.
Abstract: This talk will look at how, given a major shift in computing from “Mobile First” to “AI First,” we need to re-examine our current understanding of AR/VR. I will explore how AR/VR and experiential communications will redefine the relationship between human and machine intelligence. This new, intimate connection between AR/VR and AI will be key to identifying future network architectures as we begin an era in which the augmentation of humans will not be so much about upgrading our external tools as merging with them.
Bio: Tish Shute is Director, AR/VR, Corporate Strategy and Technology Planning, Huawei, USA. Previously, Tish worked with Will Wright (The Sims, SimCity) to create a new generation of mobile social experiences based on neural nets and innovative approaches to machine learning. At Stupid Fun Club, Tish worked with Will on next-generation entertainment: smart toys, augmented reality television, and games. Tish has taken a leading role in the emergence of augmented and virtual reality into the consumer market. She is co-founder of AugmentedReality.ORG, a global not-for-profit organization dedicated to advancing augmented and virtual reality (AR and VR). She also co-founded the Augmented Reality Event (ARE) and Augmented World Expo (AWE), which is now in its eighth year and is the world’s largest event focused on AR and VR. Tish is a recognized speaker in the AR and VR industry and a sought-after advisor for augmented and virtual reality initiatives. Tish began her career in design and technology doing visual effects for film, television, theme parks, and aerospace. Tish’s first company, NPlus1, pioneered the use of automation and robotics in filmmaking and entertainment experiences. Tish has an MFA in Combined Media from Hunter College, NY, and an MPhil (Ph.D. ABD) in Culture and Media from New York University.
- Coffee Break
- Session 1
-
Characterization of 360-degree Videos
Shahryar Afzal, Jiasi Chen, and K. K. Ramakrishnan (University of California, Riverside)
-
Abstract:
Online streaming of Virtual Reality and 360-degree videos is rapidly growing, as more and more major content providers and news outlets adopt the format to enrich the user experience. We characterize 360-degree videos by examining several thousand YouTube videos across more than a dozen categories. 360-degree videos, at first sight, seem to pose a challenge for the network to stream because of their substantially higher bit rates and larger number of resolutions. However, a careful examination of video characteristics reveals that there are significant opportunities for reducing the actual bit rate delivered to client devices based on the user’s field of view. We study the bit rate and the motion in 360-degree videos, and compare them against regular videos by investigating several important metrics. We find that 360-degree videos are less variable in terms of bit rate, and have less motion than regular videos. Our expectation is that the variability in bit rates due to camera motion in regular videos (or switching between cameras) now translates into responsiveness requirements for end-to-end 360-degree streaming architectures.
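As a rough illustration of the bit-rate variability comparison described in this abstract, the sketch below computes the coefficient of variation of per-segment bit rates for a 360-degree video and a regular video. The segment values and the choice of metric are illustrative assumptions, not the authors' methodology.

    # Illustrative sketch (not the authors' code): compare bit-rate variability
    # of a 360-degree video and a regular video using per-segment bit rates.
    from statistics import mean, stdev

    def coefficient_of_variation(bitrates_kbps):
        """Standard deviation divided by mean; higher means a more variable bit rate."""
        return stdev(bitrates_kbps) / mean(bitrates_kbps)

    # Hypothetical per-segment bit rates (kbps) for two videos.
    video_360 = [18000, 18500, 17900, 18200, 18100, 18300]
    regular   = [4000, 9500, 2500, 8000, 3000, 12000]

    print("360 video CoV:     %.3f" % coefficient_of_variation(video_360))
    print("Regular video CoV: %.3f" % coefficient_of_variation(regular))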
-
-
VR Video Conferencing over Named Data Networks
Liyang Zhang, Syed Obaid Amin, and Cedric Westphal (Huawei Research Center)
-
Abstract:
We propose a VR video conferencing system over Named Data Networking (NDN). The system is designed to support real-time, multi-party streaming and playback of 360-degree video on a web player. A centralized architecture is used, with a signaling server to coordinate multiple participants. To meet the real-time requirement, a protocol featuring prefetching is used for producer-consumer communication. Along with NDN’s native support for multicast, this design is expected to better support the large amounts of data streamed between multiple users. As a proof of concept, a prototype of the system is implemented with one-way real-time 360 video streaming. Experiments show that seamless streaming and interactive playback of 360 video can be achieved with low latency. The proposed system therefore has the potential to provide an immersive VR experience for real-time multi-party video conferencing.
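A minimal sketch of the prefetching idea mentioned in this abstract: the consumer keeps a window of outstanding Interests ahead of the playback point so frames arrive before they are needed. The name scheme, window size, and the send_interest/on_data helpers are hypothetical placeholders, not the paper's protocol or an NDN library API.

    # Illustrative consumer-side prefetching sketch (hypothetical helpers, not NDN library code).
    PREFETCH_WINDOW = 8  # number of Interests kept outstanding ahead of playback

    class PrefetchingConsumer:
        def __init__(self, prefix, send_interest):
            self.prefix = prefix                  # e.g. "/conference/roomA/userB/video"
            self.send_interest = send_interest    # callable that expresses one Interest by name
            self.next_seq = 0                     # next sequence number to request
            self.playback_seq = 0                 # sequence number currently being played

        def fill_window(self):
            # Keep PREFETCH_WINDOW Interests in flight ahead of the playback point.
            while self.next_seq < self.playback_seq + PREFETCH_WINDOW:
                self.send_interest("%s/%d" % (self.prefix, self.next_seq))
                self.next_seq += 1

        def on_data(self, seq, frame):
            # Deliver the frame to the player, advance playback, and top up the window.
            self.playback_seq = max(self.playback_seq, seq + 1)
            self.fill_window()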
-
-
Prioritized Buffer Control in Two-tier 360 Video Streaming
Fanyi Duanmu, Eymen Kurdoglu, Amir Hosseini, Yong Liu, and Yao Wang (New York University)
-
Abstract:
360-degree video compression and streaming is one of the key components of Virtual Reality (VR) applications. In 360 video streaming, a user may freely navigate through the captured 3D environment by changing her desired viewing direction, and only a small portion of the entire 360-degree video is watched at any time. Streaming the entire raw 360-degree video is therefore unnecessary and bandwidth-consuming. On the other hand, streaming only the video in the user’s current view direction will introduce streaming discontinuity whenever the user changes her view direction. In this work, a two-tier 360 video streaming framework with prioritized buffer control is proposed to effectively accommodate the dynamics in both network bandwidth and viewing direction. Through simulations driven by real network bandwidth and viewing direction traces, we demonstrate that the proposed framework significantly outperforms conventional 360 video streaming solutions.
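A schematic sketch of a prioritized buffer-control rule in the spirit of the two-tier framework described above: the base tier (a low-rate full 360 view) is filled to a safety target before any bandwidth is spent on the enhancement tier covering the predicted viewport. The buffer targets and tier names are illustrative assumptions, not the paper's exact controller.

    # Illustrative two-tier download decision (not the paper's exact controller).
    BASE_TARGET_SEC = 10.0        # keep a long low-quality full-view buffer for robustness
    ENHANCEMENT_TARGET_SEC = 2.0  # keep a short high-quality viewport buffer for freshness

    def next_download(base_buffer_sec, enh_buffer_sec):
        """Return which tier to fetch next, giving the base tier priority."""
        if base_buffer_sec < BASE_TARGET_SEC:
            return "base-tier segment (low-rate, full 360 view)"
        if enh_buffer_sec < ENHANCEMENT_TARGET_SEC:
            return "enhancement-tier segment (high-rate, predicted viewport)"
        return "idle (both buffers at target)"

    print(next_download(base_buffer_sec=4.0, enh_buffer_sec=1.0))   # fills base tier first
    print(next_download(base_buffer_sec=12.0, enh_buffer_sec=0.5))  # then the viewport tier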
-
-
Ultra Wide View Based Panoramic VR Streaming
Ran Ju, Jun He, Fengxin Sun, Jin Li, Feng Li, Jirong Zhu, and Lei Han (Huawei Technologies)
-
Abstract:
Online VR streaming faces great challenges, such as high throughput and real-time interaction requirements. In this paper, we propose a novel ultra-wide-view based method to stream high-quality VR over the Internet at low bandwidth and little computation cost. First, we transmit only the region the user is looking at, instead of the full 360-degree view, to save bandwidth. To achieve this, we split the source VR video into small grid videos in advance; the grid videos can flexibly reconstruct any view on the user end. Second, because users generally interact at low speed, we expand the view the user requested to meet the real-time interaction requirement. In addition, a low-resolution full-view stream is supplied to handle exceptional cases such as high-speed view changes. We test our solution in an experimental network, and the results show a remarkable bandwidth saving of over 60%.
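The grid-video selection step described above can be pictured with the sketch below, which picks the tiles overlapping an expanded field of view around the user's requested viewing angle. The grid layout, field of view, and margin are illustrative assumptions rather than the authors' parameters.

    # Illustrative grid-video selection (assumed 6x4 tile grid over the equirectangular panorama).
    GRID_COLS, GRID_ROWS = 6, 4          # each tile spans 60 deg of yaw and 45 deg of pitch
    FOV_YAW, FOV_PITCH = 100.0, 90.0     # assumed headset field of view, in degrees
    MARGIN = 30.0                        # extra view expansion to absorb slow head motion

    def yaw_distance(a, b):
        """Smallest angular distance between two yaw angles, handling wrap-around."""
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)

    def tiles_for_view(yaw_deg, pitch_deg):
        """Return (col, row) indices of the grid videos needed for the expanded view."""
        tile_w, tile_h = 360.0 / GRID_COLS, 180.0 / GRID_ROWS
        half_w = (FOV_YAW + MARGIN) / 2.0 + tile_w / 2.0
        half_h = (FOV_PITCH + MARGIN) / 2.0 + tile_h / 2.0
        return [(c, r)
                for c in range(GRID_COLS)
                for r in range(GRID_ROWS)
                if yaw_distance((c + 0.5) * tile_w, yaw_deg) <= half_w
                and abs((r + 0.5) * tile_h - pitch_deg) <= half_h]

    print(tiles_for_view(yaw_deg=90.0, pitch_deg=90.0))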
-
- Lunch Break
- Session 2
-
On the Networking Challenges of Mobile Augmented Reality
Wenxiao Zhang (Hong Kong University of Science and Technology), Bo Han (AT&T Labs -- Research), and Pan Hui (Hong Kong University of Science and Technology)
-
Abstract:
In this paper, we conduct a reality check for Augmented Reality (AR) on mobile devices. We dissect and measure the cloud-offloading feature for computation-intensive visual tasks of two popular commercial AR systems. Our key finding is that their cloud-based recognition is still not mature and not optimized for latency, data usage and energy consumption. In order to identify the opportunities for further improving the Quality of Experience (QoE) for mobile AR, we break down the end-to-end latency of the pipeline for typical cloud-based mobile AR and pinpoint the dominating components in the critical path.
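Latency breakdowns of the kind mentioned above are typically obtained by timestamping each stage of the offloading pipeline. The sketch below shows one way such instrumentation could look; the stage names and the callables passed in are placeholders, not the measured systems' APIs.

    # Illustrative instrumentation for an end-to-end mobile AR latency breakdown.
    import time

    def timed(label, fn, breakdown, *args):
        """Run fn(*args), record its wall-clock duration under label, and return its result."""
        start = time.monotonic()
        result = fn(*args)
        breakdown[label] = time.monotonic() - start
        return result

    def run_pipeline(capture_frame, encode, upload, recognize_on_server, render_annotation):
        # Each argument is a callable for one stage of a cloud-offloaded AR pipeline (placeholders).
        breakdown = {}
        frame = timed("capture", capture_frame, breakdown)
        blob = timed("encode", encode, breakdown, frame)
        timed("upload", upload, breakdown, blob)
        result = timed("cloud recognition", recognize_on_server, breakdown, blob)
        timed("render", render_annotation, breakdown, result)
        for stage, seconds in breakdown.items():
            print("%-18s %6.1f ms" % (stage, seconds * 1000))
        return breakdown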
-
-
VR is on the edge: how to deliver 360° videos in mobile networks
Simone Mangiante and Guenter Klas (Vodafone Group R&D), Amit Navon, Zhuang GuanHua, and Ju Ran (Huawei), and Marco Dias Silva (Vodafone Group R&D)
-
Abstract:
VR/AR is rapidly progressing towards enterprise and end customers with the promise of bringing immersive experiences to numerous applications. Soon it will target smartphones from the cloud, and 360° video delivery will impose unprecedented requirements for ultra-low latency and ultra-high throughput on mobile networks. The latest developments in NFV and Mobile Edge Computing already reveal the potential to enable VR streaming in cellular networks and to pave the way towards 5G and the next stages of VR technology. In this paper we present a Field of View (FOV) rendering solution at the edge of a mobile network, designed to optimize the bandwidth and latency required by VR 360° video streaming. Preliminary test results show the immediate bandwidth-saving benefits this approach can provide and suggest new directions for VR/AR network research.
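As a back-of-envelope illustration of why FOV rendering at the edge saves bandwidth, the snippet below compares the pixel area of a typical viewport with the full equirectangular frame. The frame size and field of view are assumptions, not the parameters of the tests reported in the paper.

    # Rough pixel-budget comparison: full 360 frame vs. rendered field of view (assumed sizes).
    full_w, full_h = 3840, 1920        # full equirectangular frame (4K 360 video)
    fov_yaw, fov_pitch = 100.0, 90.0   # assumed headset field of view, degrees

    fov_pixels = (fov_yaw / 360.0) * full_w * (fov_pitch / 180.0) * full_h
    print("Viewport fraction of full frame: %.0f%%" % (100 * fov_pixels / (full_w * full_h)))
    # -> roughly 14%, suggesting large potential savings from FOV-based delivery.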
-
-
VR/AR Immersive Communication: Caching, Edge Computing, and Transmission Trade-Offs
Jacob Chakareski (The University of Alabama)
-
Abstract:
We study the delivery of 360°-navigable videos to 5G VR/AR wireless clients in future cooperative multi-cellular systems. A collection of small-cell base stations interconnected via backhaul links share their caching and computing resources to maximize the aggregate reward they earn by serving 360° videos requested by VR/AR wireless clients. We design an efficient representation method to construct the 360° videos such that they deliver only the remote scene viewpoint content genuinely needed by the VR/AR user, thereby overcoming the present highly inefficient approach of sending a bulky 360° video, whose major part comprises scene information never accessed by the user. Moreover, we design an optimization framework that allows the base stations to select cooperative caching/rendering/streaming strategies that maximize the aggregate reward they earn when serving the users, for the given caching/computational resources at each base station. We formulate the problem of interest as an integer program, show its NP-hardness, and derive a fully-polynomial-time approximation solution with strong performance guarantees. Our advances demonstrate orders-of-magnitude operational efficiency gains over state-of-the-art caching and 360° video representation mechanisms and are very promising. This is a first-of-its-kind study exploring the fundamental trade-offs between caching, computing, and communication for emerging VR/AR applications of prospectively broad societal impact.
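A schematic skeleton of the kind of reward-maximizing, budget-constrained integer program described above (purely illustrative; not the paper's actual formulation):

    \begin{align*}
    \max_{x}\ & \sum_{b}\sum_{v}\sum_{r} \rho_{b,v,r}\, x_{b,v,r} \\
    \text{s.t.}\ & \sum_{v}\sum_{r} s_{v,r}\, x_{b,v,r} \le S_b \quad \forall b \qquad \text{(cache storage budget)} \\
    & \sum_{v}\sum_{r} c_{v,r}\, x_{b,v,r} \le C_b \quad \forall b \qquad \text{(computing budget)} \\
    & x_{b,v,r} \in \{0,1\},
    \end{align*}

where x_{b,v,r} = 1 if base station b caches and renders representation r of video v, \rho_{b,v,r} is the expected reward for serving it, s_{v,r} and c_{v,r} are its storage and computation costs, and S_b, C_b are the per-base-station budgets. These symbols are illustrative placeholders for the trade-off the abstract describes.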
-
-
Delivering deep learning to mobile devices via offloading
Xukan Ran and Haoliang Chen (University of California, Riverside), Zhenming Liu (College of William and Mary), and Jiasi Chen (University of California, Riverside)
-
Abstract:
Deep learning could make Augmented Reality (AR) devices smarter, but few AR apps use such technology today because it is compute-intensive, and front-end devices often cannot deliver sufficient compute power. We propose a distributed framework that ties front-end devices to more powerful backend “helpers”, allowing deep learning to be executed locally or offloaded. The framework intelligently uses current estimates of network conditions and backend server load, in conjunction with the application’s requirements, to determine an optimal offload strategy.
This work reports our preliminary investigation in implementing this framework, in which the front-end is assumed to be smartphones. Our specific contributions include (1) development of an Android application that performs real-time object detection, either locally on the smartphone or remotely on a server; and (2) characterization of the tradeoffs between object detection accuracy, latency, and battery drain, based on the system parameters of video resolution, CNN model size, and offloading decision.
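A minimal sketch of the kind of offload decision the framework describes: choose between local and remote execution by comparing estimated end-to-end latencies under current network conditions and server load. The helper names and all numbers are illustrative assumptions, not the authors' implementation.

    # Illustrative local-vs-offload decision for per-frame object detection (assumed values).
    def estimate_remote_latency_ms(frame_bytes, uplink_mbps, server_queue_ms, server_infer_ms):
        upload_ms = frame_bytes * 8 / (uplink_mbps * 1000.0)   # transmission time only
        return upload_ms + server_queue_ms + server_infer_ms

    def choose_execution(local_infer_ms, frame_bytes, uplink_mbps,
                         server_queue_ms, server_infer_ms, accuracy_gain_required=False):
        remote_ms = estimate_remote_latency_ms(frame_bytes, uplink_mbps,
                                               server_queue_ms, server_infer_ms)
        # Offload when the estimated remote latency beats local inference,
        # or when the application insists on the larger, more accurate server-side model.
        if accuracy_gain_required or remote_ms < local_infer_ms:
            return "offload", remote_ms
        return "local", local_infer_ms

    # Example: a ~50 KB frame over a 10 Mbps uplink versus 400 ms local inference.
    print(choose_execution(local_infer_ms=400.0, frame_bytes=50_000, uplink_mbps=10.0,
                           server_queue_ms=30.0, server_infer_ms=80.0))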
-
- Coffee Break
- Open Discussion and Closing Remarks
Call For Papers
Virtual Reality (VR) and Augmented Reality (AR) are expected to be the next-generation killer apps of the future Internet. This workshop will bring together researchers and developers working on VR/AR and networking to share their experience, exchange views, and explore future research directions. Contemporary video applications increasingly require more pixels, higher frame rates, shorter delays, and lower packet loss rates, and immersive VR/AR videos are the representative applications. VR/AR is projected to form a $162 billion market by 2020. However, many technical bottlenecks and challenges need to be overcome to facilitate the wide adoption of VR/AR. In particular, high-throughput and low-latency data transmission and real-time video generation and adaptation call for significant engineering innovations, which can only happen through interdisciplinary research. The workshop will bring together experts from different fields, including video coding, communication, networking, and systems, to exchange ideas on novel network architecture and protocol designs to support VR/AR, as well as novel video coding and transmission designs that deliver an immersive VR/AR experience in future networks.
The ACM SIGCOMM Workshop on Virtual Reality and Augmented Reality Network aims to foster a higher representation of researchers working on VR/AR networking at SIGCOMM. The workshop will also serve as a meeting point for researchers worldwide to share new ideas and experiences and to discuss the challenges linked to the development of VR/AR networks.
Topics of Interest
We solicit stimulating, original, previously unpublished ideas on completed work, position papers, and/or work-in-progress papers in the form of extended abstracts. We further encourage papers that propose new research directions or could generate lively debate at the workshop. We invite submissions on a wide range of VR/AR network research topics, including, but not limited to:
- Joint video coding and transmission design
- Networking requirements and challenges of VR/AR
- Ultra-low-latency networking
- High-throughput transport for VR/AR
- Adaptive video streaming under network/user constraints
- Novel content distribution network for VR/AR
- Management of VR/AR network
- Wireless and mobile video systems
- VR/AR in 5G Networks
Submission Instructions
Submissions must be original, unpublished work, and not under consideration at another conference or journal. Submitted papers must be at most six (6) pages long, including all figures, tables, references, and appendices, in two-column 10pt ACM format. Papers must include authors’ names and affiliations for single-blind peer review by the PC. Authors of accepted papers are expected to present their papers at the workshop.
Paper registration and submission can be done via HotCRP at: https://sigcomm17vrar.hotcrp.com/.
Important Dates
-
March 31, 2017, 11:59 PDT: Submission deadline (extended from March 25, 2017)
-
April 30, 2017: Acceptance notification
-
May 31, 2017: Camera-ready deadline
Authors Take Note
The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to TWO WEEKS prior to the first day of your conference. The official publication date affects the deadline for any patent filings related to published work.
Committees
- Workshop Co-Chairs
-
Richard Li
Future Networks@ Huawei, USA
-
Yong Liu
New York University, USA
- Steering Committee
-
Lili Qiu
University of Texas at Austin, USA
-
K. K. Ramakrishnan
University of California, Riverside, USA
- Technical Program Committee
-
Fred Baker
Self-employed
-
Bo Begole
Huawei
-
Bob Briscoe
Simula
-
Jiasi Chen
University of California, Riverside
-
Nuno Correia
Universidade Nova de Lisboa
-
Sujit Dey
University of California, San Diego
-
Philip Godfrey
University of Illinois at Urbana-Champaign
-
Mansoor Hanif
BT
-
Guenter Klas
Vodafone
-
Baochun Li
University of Toronto
-
Jiangchuan Liu
Simon Fraser University
-
John Lui
The Chinese University of Hong Kong
-
Klara Nahrstedt
University of Illinois at Urbana-Champaign
-
Shiv Panwar
New York University
-
Natasha Rooney
GSMA
-
Randall Stewart
Netflix
-
Ning Wang
University of Surrey
-
Yao Wang
New York University
-
Zhi-Li Zhang
University of Minnesota