ACM SIGCOMM 2020, New York City, USA

ACM SIGCOMM 2020 Workshop on Hot Topics in Video Analytics and Intelligent Edges (HotEdgeVideo 2020)

This workshop has an associated Slack channel for discussions. Click on the link below to visit it. If you're asked to sign in, use the workspace name "" to sign up or sign in.

Go to workshop Slack channel

Workshop program

The program below is also available at

  • Monday, August 10, 2020 EDT

  • 10:00 - 10:45 am EDT Keynote 1

  • Multi-Scale GPU Resource Management for Deep Learning

    Mosharaf Chowdhury (University of Michigan)

    • Abstract: GPUs have emerged as a popular choice for deep learning. To deal with ever-growing datasets, it is also common to use multiple GPUs in parallel for distributed deep learning. Although achieving cost-effectiveness in these clusters relies on efficient sharing, modern GPU hardware, deep learning frameworks, and cluster managers are not designed for efficient, fine-grained sharing of GPU resources. In this talk, I will present our recent works on efficient GPU resource management, both within a single GPU and across many GPUs in a cluster.
      I will start at the macro scale with Tiresias, a GPU cluster manager that reduces the average job completion time of training jobs by scheduling them under information constraints and assigning jobs to GPUs based on their model characteristics. It introduces two novel schedulers, one that relies on partial information and one that is information-agnostic, which allow Tiresias to perform comparably to schedulers with perfect knowledge. Next, we will move to the micro scale with Salus, which enables fine-grained sharing of individual GPUs. Salus is an execution service that enforces fine-grained sharing via two primitives: fast job switching and memory sharing. I will show how these primitives can be used to implement diverse GPU sharing policies. Salus improves the utilization of an individual GPU by 2X to 3X for hyper-parameter tuning and training; for inference applications, it improves utilization by 7X over NVIDIA MPS with small performance overhead.
      Both Tiresias and Salus are open-source and available at

      Bio: Mosharaf Chowdhury is an Assistant Professor in the EECS Department at the University of Michigan, Ann Arbor. He received his PhD from the AMPLab at UC Berkeley in 2015. His current research focuses on application-infrastructure symbiosis across different layers of software and hardware stacks. Mosharaf invented coflows and is a co-creator of Apache Spark. Software artifacts from his research have been deployed in Microsoft, Facebook, Google, and Amazon datacenters. He has received an NSF CAREER award, the 2015 ACM SIGCOMM doctoral dissertation award, multiple faculty fellowships and awards from Google, VMware, and Alibaba, an NSDI best paper award, as well as a Facebook fellowship and a Cheriton scholarship. He has also been nominated for an NSDI community award and a University of Waterloo alumni gold medal.


  • 10:45 - 11:00 am EDT Break

  • 11:00 - 11:30 am EDT Smart Cameras and DNNs

  • 11:00 - 11:15 am EDT

    Chameleon: Self-Adaptation of Video Analytics Model to Individual Surveillance Camera Environments

    Authors: Taewan Kim, Chunghun Kang, Beomjun Kim, Yongsung Kim, Seungji Yang, Kyungnam Kim (SK telecom)

  • 11:15 - 11:30 am EDT

    Compress Images with DNN for AIoT Cameras

    Authors: Pan Hu, Junha Im, Sachin Katti (Stanford University)

  • 11:30 - 11:59 am EDT Invited Talk 1

  • Edge Video Analytics: Scalability, Efficiency and Automation

    Ravi Iyer (Intel)

    • Abstract: The rapid growth of video analytics has made edge computing research especially interesting and challenging. In this talk, I will discuss scalability, efficiency and automation challenges and opportunities for edge video analytics. On scalability, I will describe the growing amount of raw visual data as well as metadata that needs to be stored for real-time and offline analytics. I will show the design of a scalable video data management system and demonstrate a few use cases to highlight the efficacy of such a system. On efficiency, I will describe the need for increasing stream and analytics density and how HW/SW co-design and acceleration can be applied to make this happen. On automation, I will describe the need for tools to automate the E2E workflow for video analytics at the edge and show examples ranging from addressing labeling challenges to automating model optimization on target edge platforms. I will finally outline some open research challenges and opportunities for future edge video analytics.

      Bio: Ravi Iyer is an Intel Fellow and Sr. Director of the Emerging Systems Lab in Intel Labs. His research interests are in driving innovative systems, architectures and technologies for emerging workloads and edge/cloud infrastructure. He has published 150+ papers in areas such as SoC architectures (from edge devices to cloud servers), novel (visual/speech/AI) accelerators, cache/memory hierarchies, QoS and performance analysis of emerging workloads. He has also filed 70+ patents. He received his PhD in Computer Science from Texas A&M University. He is also an IEEE Fellow.


  • 12:00 - 1:00 pm EDT Lunch

  • 1:00 - 1:45 pm EDT Keynote 2

  • A talk in five graphs

    Keith Winstein (Stanford University)

    • Abstract: Cameras capture tons of video each day, with limits on resources for communication and for computation (at the camera or elsewhere). This makes it a real challenge to make use of all that data, with a lot of interesting tradeoffs to be navigated. No single solution is likely to be a silver bullet. How can we understand where in the landscape a given system lies? In this talk, I'll discuss five tradeoffs -- that is, five pairs of axes to put on a graph -- that I propose may be helpful in characterizing and understanding the contribution of systems in this research domain.

      Bio: Keith Winstein is an assistant professor of computer science and, by courtesy, of electrical engineering at Stanford University. His research group creates new kinds of networked systems by rethinking abstractions around communication, compression, and computing (


  • 1:45 - 2:15 pm EDT Invited Talk 2

  • Managing Edge compute cost for Live Video Analytics (and our integration with Microsoft Rocket)

    Avi Kewalramani (Microsoft)

    • Abstract: Unprecedented advances in computer vision and machine learning have opened opportunities for video analytics applications of widespread interest to society, science, and business. While computer vision models have become more accurate and capable, they are also becoming resource-hungry and expensive to run for 24/7 analysis of video. As a result, live video analytics across multiple cameras also means a large computational footprint on premises, built with a substantial amount of expensive edge compute hardware (CPUs, GPUs, etc.).
      Total cost of ownership (TCO) for video analytics is an important consideration and pain point for our customers. With that in mind, we integrated Live Video Analytics from Azure Media Services and Microsoft Rocket (from Microsoft Research) to enable an order-of-magnitude improvement in throughput per edge core (frame per second analyzed per CPU/GPU core), while maintaining the accuracy of the video analytics insights.

      Bio: Avi Kewalramani is a customer and business focused product manager for Microsoft’s Live Video Analytics platform. In the past, he has launched Azure Storage Events, Azure Change Feed and co-launched Azure Data Lake Gen 2. Prior to Product, he was a software engineer and engineering manager focused on big data and bioinformatics.


Call for Papers

Cameras are everywhere! Analyzing live videos from these cameras has great potential to impact science and society. Enterprise cameras are deployed for a wide variety of commercial and security reasons. Consumer devices themselves have cameras, and users are interested in analyzing live videos from these devices. We are living in a golden era for computer vision and AI, fueled by game-changing infrastructure advancements, breakthroughs in machine learning, and copious training data, which together have greatly expanded the range of capabilities. Live video analytics has the potential to impact a wide range of verticals, including public safety, traffic efficiency, infrastructure planning, entertainment, and home safety.

Analyzing live video streams is arguably the most challenging of domains for "systems-for-AI". Unlike text or numeric processing, video analytics requires higher bandwidth, consumes considerable compute cycles, necessitates richer query semantics, and demands tighter security and privacy guarantees. Video analytics has a symbiotic relationship with edge compute infrastructure: edge computing makes compute resources available closer to the data sources (i.e., the cameras). All aspects of video analytics call for a "green-field" design, from vision algorithms to the systems processing stack, networking links, and hybrid edge-cloud infrastructure. Such a holistic design will enable the democratization of live video analytics, so that any organization with cameras can obtain value from them.

Topics of Interest

This workshop calls for research on issues and solutions that can enable live video analytics with a role for edge computing. Topics of interest include (but are not limited to) the following:
  • Low-cost video analytics
  • Deployment experience with large arrays of cameras
  • Storage of video data and metadata
  • Interactive querying of video streams
  • Network design for video streams
  • Hybrid cloud architectures for video processing
  • Scheduling for multi-tenant video processing
  • Training of vision neural networks
  • Edge-based processor architectures for video processing
  • Energy-efficient system design for video analytics
  • Intelligent camera designs
  • Vehicular and drone-based video analytics
  • Tools and datasets for video analytics systems
  • Novel vision applications
  • Video analytics for social good
  • Secure processing of video analytics
  • Privacy-preserving techniques for video processing

Submission Instructions

Submissions must be original, unpublished work not under consideration at another conference or journal. Submitted papers must be no longer than five (5) pages, including all figures and tables, followed by as many pages as necessary for bibliographic references. Submissions should be in two-column 10pt ACM format, with author names and affiliations for single-blind peer review. The workshop also solicits research, platform, and product demonstrations. A demo submission should be a summary or extended abstract describing the research to be presented, at most one (1) page with a font no smaller than 10 points, in PDF format. Demo submission titles should begin with "Demo:".

Authors of accepted papers are expected to present their work at the workshop. Papers accepted for presentation will be published in the SIGCOMM workshop proceedings and will be available in the ACM Digital Library. You may find these LaTeX and MS Word templates useful in complying with the above requirements.

Submit your work at

Important Dates

  • May 11, 2020 11:59 PST

    Paper submission deadline

  • May 31, 2020 11:59 PST

    Paper acceptance notification

  • June 10, 2020 11:59 PST

    Camera-ready papers due

  • August 10, 2020

    Workshop

Program Committee

  • Ganesh Ananthanarayanan (co-chair)

    Microsoft Research

  • Junchen Jiang

    University of Chicago

  • Yunxin Liu (co-chair)

    Microsoft Research

  • Padmanabhan (Babu) Pillai

    Intel Labs and CMU

  • Yuanchao Shu (co-chair)

    Microsoft Research

  • Chenren Xu

    Peking University

  • Fengyuan Xu

    Nanjing University

  • Harry Xu

    University of California, Los Angeles

  • Mi Zhang

    Michigan State University