Point Cloud Geometry Compression using Parameterized Graph Fourier Transform
Hinata Kirihara, Shoichi Ibuki, Takuya Fujihashi (Osaka University); Toshiaki Koike-Akino (Mitsubishi Electric Research Laboratories); Takashi Watanabe (Osaka University)
15-min talk + 3-min Q&A
2:15PM - 2:45PM
Poster Session
Real-Time Path Tracing Rendering with Stable Diffusion-Based Denoiser for Video Games
Tianyu Tu, Zhili He (Wuhan University); Tianyi Liu (University of Texas at San Antonio); Ye Wang, Xiaobo Zhou (University of Macau); Dazhao Cheng, Chuang Hu (Wuhan University)
Analyzing the Impact of L4S on Video Conferencing Performance over 5G Networks
Nawel Alioua, Elizabeth M. Belding (University of California Santa Barbara)
Power Management via Scheduling and Governing Co-Optimization for Video Frame Rendering
Standardizing Multimedia QoE Telemetry from Telecommunications Networks for Open Analytics
Yifan Wang, Minzhao Lyu, Vijay Sivaraman (University of New South Wales)
15-min talk + 3-min Q&A
Real-time Dynamic SLAM Using RGB Cameras for Mobile Augmented Reality
Shneka Muthu Kumara Swamy, Qi Han (Colorado School of Mines)
15-min talk + 3-min Q&A
Online Mitigation of Spatial Drift of Virtual Objects in Mobile Augmented Reality
Shneka Muthu Kumara Swamy, Qi Han (Colorado School of Mines)
15-min talk + 3-min Q&A
3:45PM - 4:00PM
Closing Remarks
Fangxin Wang & Zili Meng
Call for Papers
Multimedia has played a significant role in driving Internet usage and has led to a range of technological advancements such as content delivery networks, compression algorithms, and streaming protocols. With emerging applications including (but not limited to) augmented and virtual reality (AR/VR), real-time conferencing, AI-generated content, and video analytics, multimedia is fundamentally reshaping how experiences are shared online and continues to drive the future of the Internet. Techniques developed for traditional video streaming should be revisited in light of these next-generation ultra-low-latency, interactive, and immersive technologies, which call for new ways to optimize and innovate in emerging multimedia systems. This workshop will bring together experts from different fields, including traditional video streaming research, source video coding, analytics, rate adaptation algorithms, networked systems, and immersive media such as 3D and volumetric video streaming and AR/VR applications, to exchange ideas and identify the challenges and opportunities in designing networked systems for these emerging multimedia technologies.
Building on the successful Emerging Multimedia Systems (EMS) workshop at SIGCOMM'23, this workshop calls for research on the issues and solutions that enable networked systems for emerging multimedia. Topics of interest include (but are not limited to) the following:
Networked systems for immersive content capture, streaming, and display
Networked systems for AI-driven video applications
Networked systems for multimedia generative AI
Machine learning for emerging multimedia distribution
Ultra-low-latency networking for AR/VR applications
High-throughput transport and distribution for emerging media
Adaptive streaming under network/user constraints for immersive media
Novel content distribution networks for AR/VR applications
Management of AR/VR networked systems
Wireless and mobile immersive systems
AR/VR applications in 5G & 6G wireless networks
Compression and transmission design for 3D content
Edge cloud systems for immersive experiences
Quality of Experience for emerging multimedia
Security and privacy in AR/VR applications
In addition to typical short papers that present a complete idea with a proper evaluation, we also welcome the following types of papers:
Early-stage visionary papers and position papers
Retrospectives on the evolution of multimedia networked systems
Operational experience papers
Papers describing datasets, libraries, open-source software, etc.
The workshop combines presentations of new research in short papers and lightning papers. We accept the following types of submissions:
Short papers present new research that has not been previously published. For a short paper to be considered for publication, please submit work describing early or emerging results in a relevant topic area. Position papers are also welcome. There is a 6-page limit for short papers, including figures, tables, and any appendices and references.
Lightning papers can provide a summary of early, emerging, or ongoing work, as well as short updates on previously published work. Submissions of this type will be presented in a demo style and will not appear in the proceedings. For a lightning paper to be considered for presentation, please submit an extended abstract no longer than 2 pages, with a maximum of one additional page for references only.
Submission Instructions
Submissions must be original, unpublished work not under review at other venues. Submitted papers must be at most six pages long, including all figures, tables, references, and appendices, in two-column 10pt ACM format (https://github.com/scyue/latex-sigcomm18). Reviewing is double-blind, and submissions must follow one of the following two formats:
Short papers: 6 pages, including figures, tables, and any appendices and references.
Lightning papers: 2 pages, with a maximum of one additional page for references only.