ACM SIGCOMM 2017, Los Angeles, CA

ACM SIGCOMM 2017 2nd Workshop on QoE-based Analysis and Management of Data Communication Networks (Internet-QoE 2017)

Workshop Program

  • Monday, August 21, 2017, Tesla Room (Engineering IV)

  • 8:30am - 8:45am Opening

    Room: Tesla Room (Engineering IV)

  • 8:45am - 10:30am Session 1: QoE-based Traffic Monitoring and Analysis

    Session Chair: Florian Wamser (University of Würzburg)

    Room: Tesla Room (Engineering IV)

  • Keynote: Quality of Experience Demystified

    Oliver Hohlfeld (RWTH Aachen University)


    Many Internet applications are designed to be used by humans. Yet the performance metrics commonly applied to evaluate and optimize these applications do not consider aspects of human perception; they measure machine-centric performance instead. To address this, Quality of Experience (QoE) research aims at providing human-centric performance evaluation techniques. Still, many myths and misconceptions exist about what QoE really is, how it can be applied, and how research can be performed in the QoE domain. These myths are often rooted in translation problems between networking and QoE research, two fields with their own languages, concepts, and venues. This talk aims at bridging that gap and motivating research in the field by demystifying the concept of QoE.


    Bio: Oliver Hohlfeld heads the network architectures group in the Institute of Communication and Distributed Systems (COMSYS) at RWTH Aachen University. He obtained his Ph.D. from TU Berlin in 2013, advised by Anja Feldmann. His current research focuses on Internet measurements, adaptive Internet architectures, and the QoE-driven evaluation of Internet architectures and applications.


  • A QoE Perspective on HTTP/2 Server Push

    Torsten Zimmermann, Benedikt Wolters, and Oliver Hohlfeld (RWTH Aachen University)

    • Abstract:

      HTTP/2 was recently standardized to optimize the Web by promising faster Page Load Times (PLT) compared to the widely deployed HTTP/1.1. One promising feature is HTTP/2 server push, which turns the formerly pull-only Web into a push-enabled one. By enabling servers to preemptively push resources to clients without an explicit request, it promises further improvements in overall PLT. Despite this potential, it remains unknown whether server push can indeed yield human-perceivable improvements. In this paper, we address this open question by assessing server push in both i) a laboratory and ii) a crowdsourcing study. Our study assesses whether server push can lead to perceivably faster PLTs compared to HTTP/1.1 and HTTP/2 without push. We base this study on a set of 28 push-enabled real-world websites selected in an Internet-wide measurement. Our results reveal that our subjects are able to perceive the use of server push. However, its usage does not necessarily yield perceived PLT improvements and can sometimes even be noticeably detrimental.


  • PAIN: A Passive Web Speed Indicator for ISPs

    Martino Trevisan, Idilio Drago, and Marco Mellia (Politecnico di Torino)

    • Abstract:

      Understanding the quality of web browsing enjoyed by users is key to optimize services and keep users’ loyalty. This is crucial for Internet Service Providers (ISPs) to anticipate problems. Quality is subjective, and the complexity of today’s pages challenges its measurement. OnLoad time and SpeedIndex are notable attempts to quantify web performance. However, these metrics are computed using browser instrumentation and, thus, are not available to ISPs. PAIN (PAssive INdicator) is an automatic system to observe the performance of web pages at ISPs. It leverages passive flow-level and DNS measurements which are still available in the network despite the deployment of HTTPS. With unsupervised learning, PAIN automatically creates a model from the timeline of requests issued by browsers to render web pages, and uses it to analyze the web performance in real-time. We compare PAIN to indicators based on in-browser instrumentation and find strong correlations between the approaches. It reflects worsening network conditions and provides visibility into web performance for ISPs.


  • Perceived Performance of Top Retail Webpages In the Wild

    Qingzhu Gao, Prasenjit Dey, and Parvez Ahammad (Instart Logic, Inc.)

    • Abstract:

      Clearly, no one likes webpages with poor quality of experience (QoE). Being perceived as slow or fast is a key element in the overall perceived QoE of web applications. While extensive effort has been put into optimizing web applications (both in industry and academia), little work exists on characterizing which aspects of the webpage loading process truly influence a human end-user's perception of the speed of a page. In this paper we present SpeedPerception, a large-scale web performance crowdsourcing framework focused on understanding the perceived loading performance of above-the-fold (ATF) webpage content. Our end goal is to create free open-source benchmarking datasets to advance the systematic analysis of how humans perceive the webpage loading process.

      In Phase-1 of our SpeedPerception study using Internet Retailer Top 500 (IR 500) websites, we found that commonly used navigation metrics such as onLoad and Time To First Byte (TTFB) fail (less than 60% match) to represent majority human perception when comparing the speed of two webpages. We present a simple 3-variable machine learning model that explains the majority of end-user choices better (with 87 ± 2% accuracy). In addition, our results suggest that the time needed by end-users to evaluate the relative perceived speed of a webpage is far less than the time of its visualComplete event.



  • 10:30am - 11:00am Coffee Break (Foyer)

  • 11:00am - 12:30pm Session 2: QoE-based Network Management

    Session Chair: Vyas Sekar (CMU)

    Room: Tesla Room (Engineering IV)

  • Keynote: Perspectives on Internet-QoE from the Vantage Point of a CDN Operator

    Marcelo Torres (Akamai)


    A content distribution network (CDN) delivers transactions to end-users on behalf of its customers, also known as content providers. It follows that content providers have a primary role in shaping the view CDNs have of Internet QoE. Partnerships with ISPs and carriers around the world are also critical to a CDN's ability to operate. It is important that CDNs make decisions that meet the mutual goals of carriers and CDNs alike. As a fundamental component of the overall physical and logical infrastructure, CDNs have a responsibility to the collective Internet. The policies driven by content providers and Internet Service Providers are important factors that guide how CDNs use active and passive measurements to make instantaneous control decisions that ultimately determine an end-user's Internet QoE. In this keynote, we look at these perspectives from the vantage point of a CDN operator. We take a historical perspective on the way in which customers evaluate the performance delivered to their clients, describe measurement strategies applied by different classes of our customers, and provide some perspective on how these measurements have become increasingly sophisticated over time. We describe elements of the Mapping control system that governs how Akamai's CDN operates, provide some perspective on the measurements that drive the decision-making of this control system, and explain how these measurements have evolved to meet a richer application-specific and end-user-focused Internet QoE landscape. We expect that this subject matter may provide some insight into measurement challenges facing the industry today and in the future.


    Bio: Marcelo Torres is the Director of Mapping Performance Engineering at Akamai Technologies. Since joining Akamai in 2000, Marcelo’s focus has been on the architecture, models, algorithms, and operating policies that are central components of the control system that drives how end-users transact with the Akamai platform. Prior to joining Akamai, Marcelo was at BBN and contributed to the transformation of the BBN Planet network into a full-fledged ISP. Marcelo received his B.S. in Computer Science and Engineering from MIT and his Ph.D. in Operations Research from Stanford University.


  • Server and Content Selection for MPEG DASH Video Streaming with Client Information

    Florian Wamser, Steffen Höfner, Michael Seufert, and Phuoc Tran-Gia (University of Würzburg)

    • Abstract:

      In HTTP adaptive streaming (HAS), such as MPEG DASH, the video is split into chunks and is available in different quality levels. If the video chunks are stored or cached on different servers to deal with the high load in the network and the Quality of Experience (QoE) requirements of the users, the problem of content selection arises. In this paper, we evaluate client-side algorithms for dynamically selecting an appropriate content server during DASH video streaming. We present three algorithms with which the DASH client itself can determine the most appropriate server based on client-specific metrics, like actual latency or bandwidth to the content servers. We evaluate and discuss the proposed algorithms with respect to the resulting DASH streaming behavior in terms of buffer levels and quality level selection.


  • On the Bitrate Adaptation of Shared Media Experience Services

    Argyrios G. Tasiopoulos (University College London), Ray Atarashi (IIJ Innovation Institute), and Ioannis Psaras and George Pavlou (University College London)

    • Abstract:

      In Shared Media Experience Services (SMESs), a group of people is interested in consuming a stream in a synchronised way, as in cloud gaming, live streaming, and interactive social applications. However, group synchronisation comes at the expense of other Quality of Experience (QoE) factors due to the dynamic and diverse network conditions that each group member experiences. One might wonder whether there is a way to keep a group synchronised while maintaining the highest possible QoE for each of its members. In this work, we first create a Quality Assessment Framework capable of evaluating different SMES improvement approaches with respect to traditional metrics like media bitrate quality, playback disruption, and end-user desynchronisation. Secondly, we focus on bitrate adaptation for improving the QoE of SMESs, as an incrementally deployable, end-user-triggered approach, and we formulate the problem in the context of Adaptive Real Time Dynamic Programming (ARTDP). Finally, we develop and apply a simple QoE-aware bitrate adaptation mechanism that we compare against YouTube live-streaming traces, finding that it improves the YouTube performance by more than 30%.


  • 12:30pm - 2:00pm Lunch Break (Centennial Terrace)

  • 2:00pm - 3:30pm Session 3: QoE-based Network Measurement

    Session Chair: Fabián Bustamante (Northwestern University)

    Room: Tesla Room (Engineering IV)

  • Keynote: The Building Blocks of the Netflix Experience

    Lara Deek (Netflix), Sergey Fedorov (Netflix), and Te-Yuan (TY) Huang (Netflix)


    Video streaming is a huge and growing contributor to Internet traffic. Netflix alone accounts for over 37% of peak download traffic in the United States. This talk will give an overview of how Netflix video streaming works and how Netflix optimizes the streaming experience. We will touch upon how Netflix accelerates its control traffic, how video assets are distributed and streamed over the Internet, and how video quality is dynamically adapted to avoid interruptions. We will also talk about a generic Internet speed test that is powered by Netflix infrastructure.


  • Bio: Lara Deek works on the control plane of Open Connect: Netflix's custom CDN for video streaming. Her work is primarily at the caching layer, where she designs algorithms that decide the content of caches across different geographical zones to ensure the best streaming experience. Lara has a PhD in Computer Science from the University of California at Santa Barbara, where she worked on building resource-efficient systems for emerging wireless technology.


  • Bio: Sergey Fedorov works on Open Connect, the CDN that delivers all Netflix video traffic. He worked on tools and services to manage and monitor thousands of Open Connect servers around the world. Now Sergey tries to find ways to leverage the Open Connect infrastructure to accelerate Netflix APIs. He is also an author of the speed test. Prior to Netflix, Sergey spent a few years in the corporate world, working for Microsoft and Intel. He holds an MS degree in Computer Science from Nizhny Novgorod State University.


  • Bio: Te-Yuan (TY) Huang works on streaming algorithms at Netflix. She designs algorithms to help users receive videos with the best possible experience, and she enjoys her research results every day by watching more videos. Prior to Netflix, TY received her Ph.D. in Computer Science from Stanford University in 2014. TY is a recipient of 2013 IRTF Applied Networking Research Prize.


  • Keynote: Unleashing the Potential of Data-Driven Networking

    Vyas Sekar (CMU)


    The last few years have witnessed the coming of age of the data-driven paradigm in various aspects of computing, (partly) empowered by advances in distributed systems research (cloud computing, MapReduce, etc.). We observe that the benefits can flow in the opposite direction: the design and management of networked systems, and especially Quality of Experience, can be improved by the data-driven paradigm. To this end, we present DDN, a new design framework that has the potential to achieve significantly better performance by harnessing more data than a single flow provides. In this talk, I will try to systematize existing instantiations of DDN by creating a unified framework for DDN, and use the framework to shed light on the common challenges and reusable design principles. We believe that by systematizing this paradigm as a broader community, we can unleash the unharnessed potential of DDN for QoE optimization.


    Bio: Vyas Sekar is an Associate Professor in the ECE Department at Carnegie Mellon University, where he is part of CyLab. His research interests lie at the intersection of networking, security, and systems. He received his Ph.D. from the Computer Science Department at Carnegie Mellon University in 2010. He earned his bachelor's degree from the Indian Institute of Technology Madras, where he was awarded the President of India Gold Medal. His work has been recognized with the NSF CAREER award, the ACM SIGCOMM Rising Star Award, the NSA Best Scientific Cybersecurity Paper Award, the CSAW Applied Security Research Prize, and multiple best paper awards (ACM SIGCOMM, ACM CoNEXT, and ACM Multimedia).


  • 3:30pm - 4:00pm Coffee Break (Foyer)

  • 4:00pm - 4:45pm Session 4: QoE Measurement

    Session Chair: Oliver Hohlfeld (RWTH Aachen University)

    Room: Tesla Room (Engineering IV)

  • On active sampling of controlled experiments for QoE modeling

    Muhammad Jawad Khokhar, Nawfal Abbasi Saber, Thierry Spetebroot, and Chadi Barakat (INRIA, France)

    • Abstract:

      For Internet applications, measuring, modeling and predicting the quality experienced by end users as a function of network conditions is challenging. A common approach for building application-specific Quality of Experience (QoE) models is to rely on controlled experimentation. For accurate QoE modeling, this approach can require a large number of experiments because of the multiplicity of the network features, their large span (e.g., bandwidth, delay), and the time needed to set up the experiments themselves. However, most often, the space of network features in which experiments are carried out shows a high degree of uniformity in the training labels of QoE. This uniformity, difficult to predict beforehand, amplifies the training cost with little or no improvement in QoE modeling accuracy. In this paper, we aim to exploit this uniformity and propose a methodology based on active learning to sample the experimental space intelligently, so that the training cost of experimentation is reduced. We prove the feasibility of our methodology by validating it on the particular case of YouTube streaming, where QoE is modeled both in terms of interruptions and stalling duration.


  • Designing a Survey Tool for Monitoring Enterprise QoE

    Kathrin Borchert, Matthias Hirth, and Thomas Zinner (University of Würzburg) and Anja Göritz (Freiburg University)

    • Abstract:

      Enterprise applications like SAP are part of the day-to-day work of a large number of employees. Similar to many modern applications, enterprise applications are often implemented in a distributed fashion and consequently suffer from network degradations resulting in impairments like increased loading delays. While the influence of these impairments on the perceived quality of users is well researched for consumer applications and network services, the impact of these impairments in a business environment is yet to be investigated.

      To address this gap we develop a non-intrusive software tool for continuously collecting subjective ratings on the performance of an enterprise application from a large number of employees. Based on the feedback from a company and results from two initial field studies we discuss the specific challenges when assessing the perceived quality of employees during regular working hours and point out our further research directions.


  • 4:45pm - 5:30pm Panel

    Room: Tesla Room (Engineering IV)

  • User-centric Networking - Beyond QoE and Engagement

    Florian Wamser (University of Würzburg), Oliver Hohlfeld (RWTH Aachen University), Marcelo Torres (Akamai), Te-Yuan Huang (Netflix), Lara Deek (Netflix), Sergey Fedorov (Netflix), Vyas Sekar (CMU)

Call For Papers

Measuring and analyzing complex networks such as the Internet has been the focus of many research efforts for over two decades now. While this approach has led to an improved understanding of the Internet and to enhancements in the management and operation of large-scale complex networks, little has been done to understand and manage traffic and networks from a user-centric perspective. QoE remains a poorly understood domain, currently restricted to small-scale lab studies and far removed from the analysis of real large-scale traffic measurements and networks. Especially in industry, QoE has become a buzzword, far from its reality within the research community, partly due to the complexity involved in deploying QoE-based network analysis and management solutions.

In addition, network operators and service providers currently struggle to keep their increasingly demanding customers happy in an increasingly competitive and complex environment, while remaining profitable at the same time. The rapid development of novel end-user services demands more and better user-centric quality concepts and metrics applied to real-world operational networks. We see current and future networks becoming more and more end-user-experience aware, but there is still a long way to go to make QoE one of the guiding paradigms for network design, management, and operation. As a first step, we need to better understand real networks and their traffic through the eyes of the end-user.

The goal of the 2nd ACM SIGCOMM Internet-QoE workshop is to scale QoE out of the lab studies context and bring it to the design, analysis and operation of real world networks and traffic. By fostering an explicit and deep integration of the end-user directly into the analysis and management of traffic and networks, we expect to reduce the gap between QoE research and its application to future network management paradigms, as well as to provide a more targeted end-user perspective to the research on Internet analysis and its future development.

Topics of Interest

Internet-QoE brings together researchers and practitioners from the Internet measurements and analysis domain and the QoE modeling and assessment domain, as well as industry players willing to integrate QoE aspects into the DNA of their daily business, with direct applications in network dimensioning, monitoring, management, and troubleshooting among others. We invite submissions in the following non-exhaustive list of topics:

  • QoE-aware networking
  • Internet measurements related to QoE
  • QoE characterization from Internet measurements
  • QoE metrics for smartphones & mobile networks
  • QoE-based traffic monitoring and troubleshooting
  • Novel algorithms to improve Internet QoE
  • Machine learning, data mining and big data analytics for Internet QoE analysis
  • QoE-based network management and analysis
  • QoS/QoE mapping, metrics and measurements in the large-scale
  • Application of QoE models and metrics to network and traffic analysis
  • Tools and techniques to gather QoE-related Internet measurements
  • Large-scale field trials shedding light on QoE aspects
  • Qualitative measurements for Internet traffic analysis
  • SDN for QoE-based network management
  • QoE-based analysis of CDNs and Cloud networks
  • Large-scale network simulation for QoE analysis
  • Novel approaches for large-scale QoE crowd-sourcing

Contact the workshop co-chairs.

Submission Instructions

Submissions must be original, unpublished work, and not under consideration at another conference or journal. Submitted papers must be at most six (6) pages long, including all figures, tables, references, and appendices, in two-column 10pt ACM format. Papers must include authors' names and affiliations for single-blind peer review by the PC. Authors of accepted papers are expected to present their papers at the workshop.

Please submit your paper via

Important Dates

  • March 31, 2017 (extended from March 17, 2017)

    Paper registration deadline

  • March 31, 2017 (extended from March 24, 2017)

    Paper submission deadline

  • April 28, 2017

    Paper acceptance notification

  • May 31, 2017 (extended from May 19, 2017)

    Camera ready deadline

Authors Take Note

The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to TWO WEEKS prior to the first day of your conference. The official publication date affects the deadline for any patent filings related to published work.