ACM SIGCOMM 2019, Beijing, China

ACM SIGCOMM 2019 Program

  • Monday, August 19, 2019

  • 8:00 am - 5:00 pm Registration

  • 8:30 am - 10:00 am Workshops and Tutorials: Early Morning Session

  • Workshops: NEAT and MAGESys

  • Tutorials: POWDER and NDNoT

  • 10:00 am - 10:30 am Tea/Coffee Break

  • 10:30 am - 12:00 pm Workshops and Tutorials: Late Morning Session

  • Workshops: NEAT and MAGESys

  • Tutorials: POWDER and NDNoT

  • 12:00 pm - 1:30 pm Lunch

    Location: Valley Wing Ballroom A+B

  • 1:30 pm - 3:00 pm Workshops and Tutorials: Early Afternoon Session

  • Workshops: NEAT, MAGESys and OptSys

  • Tutorial: NDNoT

  • 3:00 pm - 3:30 pm Tea/Coffee Break

  • 3:30 pm - 5:00 pm Workshops and Tutorials: Late Afternoon Session

  • Workshops: NEAT, MAGESys and OptSys

  • Tutorial: NDNoT

  • 5:00 pm - 6:15 pm Topic Preview 1

  • 6:30 pm - 9:00 pm Welcome Reception

    Location: Valley Wing Ballroom

  • Tuesday, August 20, 2019

  • 8:00 am - 5:00 pm Registration

  • 9:00 am - 10:15 am Opening & Keynote (TBD)

  • 10:15 am - 10:45 am Tea/Coffee Break

  • 10:45 am - 12:00 pm Technical Session 1: New Ways to Operate Networks

  • Enabling a Permanent Revolution in Internet Architecture

    James McCauley, Yotam Harchol (UC Berkeley), Aurojit Panda (NYU), Barath Raghavan (USC), Scott Shenker (UC Berkeley and ICSI)

    • Abstract: Recent Internet research has been driven by two facts and their contradictory implications: the current Internet architecture is both inherently flawed (so we should explore radically different alternative designs) and deeply entrenched (so we should restrict ourselves to backwards-compatible and therefore incrementally deployable improvements). In this paper, we try to reconcile these two perspectives by proposing a backwards-compatible architectural framework in which one can incrementally deploy radically new designs. We show how this can lead to a permanent revolution in Internet architecture by (i) easing the deployment of new architectures and (ii) allowing multiple coexisting architectures to be used simultaneously by applications. By enabling both architectural evolution and architectural diversity, this would create a far more extensible Internet whose functionality is not defined by a single narrow waist, but by the union of many coexisting architectures. By being incrementally deployable, our design is not just an interesting but unrealistic clean-slate design, but a step forward that is clearly within our reach.

       

  • Bridging the Data Charging Gap in the Cellular Edge

    Yuanjie Li, Kyu-Han Kim, Christina Vlachou, Junqing Xie (Hewlett Packard Labs)

    • Abstract: The 4G/5G cellular edge promises low-latency experiences anywhere, anytime. However, data charging gaps can arise between the cellular operators and edge application vendors, and cause over-/under-billing. We find that such a gap can come from data loss, selfish charging, or both. It can be amplified at the edge, due to its low-latency requirements. We devise TLC, a Trusted, Loss-tolerant Charging scheme for the cellular edge. At its core, TLC enables loss-selfishness cancellation to bridge the gap, and constructs publicly verifiable, cryptographic proof-of-charging for mutual trust. We implement TLC with commodity edge nodes, OpenEPC and small cells. Our experiments in various edge scenarios validate TLC's viability in reducing the gap with marginal latency and other overheads.

       

  • TEAVAR: Striking the Right Utilization-Availability Balance in WAN Traffic Engineering

    Jeremy Bogle, Nikhil Bhatia, Manya Ghobadi (MIT), Ishai Menache, Nikolaj Bjorner (Microsoft Research), Asaf Valadarsky, Michael Schapira (Hebrew University)

    • Abstract: To keep up with the continuous growth in demand, cloud providers spend millions of dollars augmenting the capacity of their wide-area backbones and devote significant effort to efficiently utilizing WAN capacity. A key challenge is striking a good balance between network utilization and availability, as these are inherently at odds; a highly utilized network might not be able to withstand unexpected traffic shifts resulting from link/node failures. We advocate a novel approach to this challenge that draws inspiration from financial risk theory: leverage empirical data to generate a probabilistic model of network failures and maximize bandwidth allocation to network users subject to an operator-specified availability target (e.g., 99.9% availability). Our approach enables network operators to strike the utilization-availability balance that best suits their goals and operational reality. We present TEAVAR (Traffic Engineering Applying Value at Risk), a system that realizes this risk management approach to traffic engineering (TE). We compare TEAVAR to state-of-the-art TE solutions through extensive simulations across many network topologies, failure scenarios, and traffic patterns, including benchmarks extrapolated from Microsoft's WAN. Our results show that with TEAVAR, operators can support up to twice as much throughput as state-of-the-art TE schemes, at the same level of availability.
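The risk-based allocation idea above can be made concrete with a minimal sketch. Everything below is illustrative: a single bottleneck link, made-up failure scenarios, and a brute-force availability check, not the paper's optimization formulation.

```python
# Hedged sketch of the TEAVAR-style idea: from an empirical failure model,
# pick the largest bandwidth allocation that remains available with
# probability >= beta (the operator's availability target).
# Scenario capacities and probabilities are invented for illustration.

def max_allocation(scenarios, beta):
    """scenarios: list of (probability, available_capacity_gbps).
    Returns the largest allocation whose survival probability is >= beta."""
    best = 0.0
    for _, cap in scenarios:
        # An allocation of `cap` survives every scenario with capacity >= cap.
        survive = sum(p for p, c in scenarios if c >= cap)
        if survive >= beta:
            best = max(best, cap)
    return best

scenarios = [
    (0.90, 10.0),  # no failure: full 10 Gbps available
    (0.07, 6.0),   # single-link failure: 6 Gbps remain
    (0.03, 2.0),   # double failure: 2 Gbps remain
]
print(max_allocation(scenarios, 0.999))  # 2.0 (only 2 Gbps always survives)
print(max_allocation(scenarios, 0.95))   # 6.0 (survives with prob ~0.97)
```

A looser availability target (99.9% vs. 95%) directly trades availability for utilization, which is the knob the paper exposes to operators.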

       

  • 12:00 pm - 1:30 pm Lunch

    Location: Valley Wing Ballroom A+B

  • 1:30 pm - 3:10 pm Technical Session 2: Transport and Congestion

  • HPCC: High Precision Congestion Control

    Yuliang Li (Harvard University and Alibaba Group), Rui Miao, Hongqiang Liu, Yan Zhuang, Fei Feng, Lingbo Tang, Zheng Cao, Ming Zhang (Alibaba Group), Frank Kelly (University of Cambridge), Mohammad Alizadeh (MIT), Minlan Yu (Harvard University)

    • Abstract: Congestion control (CC) is the key to achieving ultra-low latency, high bandwidth and network stability in high-speed networks. From years of experience operating large scale and high-speed RDMA networks, we find the existing high-speed CC schemes have inherent limitations for reaching these goals. In this paper, we present HPCC (High Precision Congestion Control), a new high-speed CC mechanism which achieves the three goals simultaneously. HPCC leverages in-network telemetry (INT) to obtain precise link load information and controls traffic precisely. By addressing challenges such as delayed INT information during congestion and overreaction to INT information, HPCC can quickly converge to utilize free bandwidth while avoiding congestion, and can maintain near-zero in-network queues for ultra-low latency. HPCC is also fair and easy to deploy in hardware. We implement HPCC with commodity programmable NICs and switches. In our evaluation, compared to DCQCN and TIMELY, HPCC shortens flow completion time by up to 95%, causing little congestion even under large-scale incasts.
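A toy illustration of INT-driven control may help; this is a simplified update in the spirit of the abstract (per-hop telemetry drives a multiplicative window adjustment toward a target utilization), not HPCC's published algorithm. The constants and the utilization estimate are assumptions.

```python
# Illustrative INT-based sender update (not HPCC's exact rule): each ACK
# carries per-hop telemetry (queue length, tx rate, capacity); the sender
# estimates the bottleneck link's utilization and scales its window
# multiplicatively toward a target utilization ETA, plus a small additive term.

ETA = 0.95   # target link utilization (assumed)
W_AI = 1.0   # additive increase term, in packets (assumed)

def utilization(qlen, tx_rate, capacity, base_rtt=1.0):
    # Standing queue plus current rate, both normalized by link capacity.
    return qlen / (capacity * base_rtt) + tx_rate / capacity

def next_window(window, hops):
    """hops: list of (qlen, tx_rate, capacity) tuples, one per traversed link."""
    u = max(utilization(*h) for h in hops)   # the bottleneck link governs
    return window * (ETA / max(u, 1e-9)) + W_AI

# One uncongested hop at 50% utilization: the window grows toward the target.
w = next_window(10.0, [(0.0, 0.5, 1.0)])
print(round(w, 2))  # 10 * 0.95/0.5 + 1 = 20.0
```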

       

  • Pluginizing QUIC

    Quentin De Coninck, François Michel, Maxime Piraux, Florentin Rochet, Thomas Given-Wilson (UCLouvain), Axel Legay (UCLouvain, Aalborg University), Olivier Pereira, Olivier Bonaventure (UCLouvain)

    • Abstract: Application requirements evolve over time and the underlying protocols need to adapt. Most transport protocols evolve by negotiating protocol extensions during the handshake. Experience with TCP shows that this leads to delays of several years or more to widely deploy standardized extensions. In this paper, we revisit the extensibility paradigm of transport protocols. We base our work on QUIC, a new transport protocol that encrypts most of the header and all the payload of packets, which makes it almost immune to middlebox interference. We propose Pluginized QUIC (PQUIC), a framework that enables QUIC clients and servers to dynamically exchange protocol plugins that extend the protocol on a per-connection basis. These plugins can be transparently reviewed by external verifiers and hosts can refuse non-certified plugins. Furthermore, the protocol plugins run inside an environment that monitors their execution and stops malicious plugins. We demonstrate the modularity of our proposal by implementing and evaluating very different plugins ranging from connection monitoring to multipath or Forward Erasure Correction. Our results show that plugins achieve expected behavior with acceptable overhead. We also show that these plugins can be combined to add their functionalities to a PQUIC connection.

       

  • Gentle Flow Control: Avoiding Deadlock in Lossless Networks

    Kun Qian, Wenxue Cheng, Tong Zhang, Fengyuan Ren (Tsinghua University)

    • Abstract: Many applications in distributed systems rely on underlying lossless networks to achieve required performance. Existing lossless network solutions propose different hop-by-hop flow controls to guarantee zero packet loss. However, another crucial problem called network deadlock occurs concomitantly. Once the system is trapped in a deadlock, a large part of the network would be disabled. Existing deadlock avoidance solutions focus all their attention on breaking the cyclic buffer dependency to eliminate circular wait (one necessary condition of deadlock). These solutions, however, impose many restrictions on network configurations and side-effects on performance. In this work, we explore a brand-new perspective to solve network deadlock: avoiding the hold-and-wait situation (another necessary condition). Experimental observations show that frequent pauses on upstream ports driven by existing flow control schemes are the root cause of hold-and-wait. We propose Gentle Flow Control (GFC) to manipulate the port rate at a fine granularity, so all ports can keep packets flowing even when cyclic buffer dependency exists, and prove theoretically that GFC can eliminate deadlock. We also present how to implement GFC in mainstream lossless networks (Converged Enhanced Ethernet and InfiniBand) with moderate modifications. Furthermore, testbed experiments and packet-level simulations validate that GFC can efficiently avoid deadlock while introducing less than 0.5% of bandwidth occupation.

       

  • SocksDirect: Datacenter Sockets can be Fast and Compatible

    Bojie Li (USTC and Microsoft Research), Tianyi Cui (University of Washington), Zibo Wang (USTC and Microsoft Research), Wei Bai, Lintao Zhang (Microsoft Research)

    • Abstract: Communication-intensive applications in hosts with multicore CPUs and high-speed networking hardware often put considerable stress on the native socket system in an OS. Existing socket replacements often leave significant performance on the table, as well as having limitations on compatibility and isolation. In this paper, we describe SocksDirect, a user-space high-performance socket system. SocksDirect is fully compatible with Linux sockets and can be used as a drop-in replacement with no modification to existing applications. To achieve high performance, SocksDirect leverages RDMA and shared memory (SHM) for inter-host and intra-host communication, respectively. To bridge the semantics gap between socket and RDMA/SHM, we optimize for the common cases while maintaining compatibility in general. SocksDirect achieves isolation by employing a trusted monitor daemon to handle control-plane operations such as connection establishment and access control. The data plane is peer-to-peer between processes, in which we remove multi-thread synchronization, buffer management, large payload copy and process wakeup overheads in common cases. Experiments show that SocksDirect achieves 7-20x better message throughput and 17-35x better latency than Linux sockets, and reduces Nginx HTTP latency to 1/5.5.

       

  • 3:10 pm - 3:40 pm Tea/Coffee Break

  • 3:40 pm - 5:20 pm Technical Session 3: Measurement

  • Zooming in on Wide-area Latencies to a Global Cloud Provider

    Yuchen Jin (Microsoft/UWashington), Sundararajan Renganathan, Ganesh Ananthanarayanan (Microsoft), Junchen Jiang (University of Chicago), Venkata N. Padmanabhan, Manuel Schroder, Matt Calder (Microsoft), Arvind Krishnamurthy (UWashington)

    • Abstract: The network communications between the cloud and the client have become the weak link for global cloud services that aim to provide low-latency services to their clients. In this paper, we first characterize WAN latency from the viewpoint of a large cloud provider, Azure, whose network edges serve hundreds of billions of TCP connections a day across hundreds of locations worldwide. In particular, we focus on instances of latency degradation and design a tool, BlameIt, that enables cloud operators to localize the cause (i.e., the faulty AS) of such degradation. BlameIt uses passive diagnosis, using measurements of existing connections between clients and the cloud locations, to localize the cause to one of the cloud, middle, or client segments. Then it invokes selective active probing (within a probing budget) to localize the cause more precisely. We validate BlameIt by comparing its automatic fault localization results with those arrived at manually by network engineers, and observe that BlameIt correctly localized the problem in all 88 incidents. Further, BlameIt issues 72x fewer active probes than a solution relying on active probing alone, and is deployed in production at Azure.
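The coarse passive-localization step can be sketched as a simple voting rule. This is only an illustration of the cloud/middle/client attribution idea; the threshold, the rule, and the data are invented, not BlameIt's production logic.

```python
# Hypothetical coarse blame assignment: if most client ASes reaching one cloud
# site see degraded latency, blame the cloud segment there; if a single client
# AS is degraded across sites, blame the client side; otherwise suspect the
# middle segment and leave it to (budgeted) active probing.

def blame(measurements, threshold=0.8):
    """measurements: list of (client_as, cloud_site, degraded: bool)."""
    for site in {m[1] for m in measurements}:
        at_site = [m for m in measurements if m[1] == site]
        if sum(m[2] for m in at_site) / len(at_site) >= threshold:
            return ("cloud", site)
    for asn in {m[0] for m in measurements}:
        of_as = [m for m in measurements if m[0] == asn]
        if sum(m[2] for m in of_as) / len(of_as) >= threshold:
            return ("client", asn)
    return ("middle", None)

data = [("AS1", "siteA", True), ("AS2", "siteA", True), ("AS1", "siteB", False)]
print(blame(data))  # ('cloud', 'siteA')
```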

       

  • RF-based Inertial Measurement

    Chenshu Wu, Feng Zhang, Yusen Fan, K. J. Ray Liu (University of Maryland, College Park)

    • Abstract: Inertial measurements are critical to almost any mobile application. They are usually obtained from dedicated sensors (e.g., accelerometer, gyroscope) that suffer from significant accumulative errors. This paper presents RIM, an RF-based Inertial Measurement system for precise motion processing. RIM turns a commodity WiFi device into an Inertial Measurement Unit (IMU) that can accurately track moving distance, heading direction, and rotating angle, requiring no additional infrastructure but a single arbitrarily placed Access Point (AP) whose location is unknown. RIM makes three key technical contributions. First, it presents a spatial-temporal virtual antenna retracing scheme that leverages multipath profiles as virtual antennas and underpins measurements of distance and orientation using commercial WiFi. Second, it introduces a super-resolution virtual antenna alignment algorithm that resolves sub-centimeter movements. Third, it presents an approach to handle measurement noises and thus delivers an accurate and robust system. Our experiments, over a multipath-rich area of >1,000 m² with one single AP, show that RIM achieves a median error in moving distance of 2.3 cm and 8.4 cm for short-range and long-distance tracking respectively, and 6.1° mean error in heading direction, all significantly outperforming dedicated inertial sensors. We also demonstrate multiple RIM-enabled applications with great performance, including indoor tracking, handwriting, and gesture control.

       

  • A Large-Scale Analysis of Deployed Traffic Differentiation Practices

    Fangfan Li (Northeastern University), Arian Akhavan Niaki (University of Massachusetts Amherst), David Choffnes (Northeastern University), Phillipa Gill (University of Massachusetts Amherst), Alan Mislove (Northeastern University)

    • Abstract: Net neutrality has been the subject of considerable public debate over the past decade. Despite the potential impact on content providers and users, there is currently a lack of tools or data for stakeholders to independently audit the net neutrality policies of network providers. In this work, we address this issue by conducting a one-year study of content-based traffic differentiation policies deployed in operational networks, using results from 1,045,413 crowdsourced measurements conducted by 126,249 users across 2,735 ISPs in 183 countries/regions. We develop and evaluate a methodology that combines individual per-device measurements to form high-confidence, statistically significant inferences of differentiation practices, including fixed-rate bandwidth limits (i.e., throttling) and delayed throttling practices. Using this approach, we identify differentiation in both cellular and WiFi networks, comprising 30 ISPs in 7 countries. We also investigate the impact of throttling practices on video streaming resolution for several popular video streaming providers.
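One ingredient of such differentiation detection can be sketched as follows. This is a hedged illustration rather than the paper's statistical methodology: it compares a content-labeled flow against a control flow carrying the same bytes obfuscated, on the assumption that a fixed-rate limit shows up as a large throughput gap with unusually low variance in the throttled flow. All thresholds are invented.

```python
# Toy fixed-rate throttling check (illustrative, not the paper's test):
# a labeled flow that is both much slower than its control flow and pinned
# at a nearly constant rate is consistent with a fixed-rate bandwidth limit.
from statistics import mean, pstdev

def looks_throttled(labeled_kbps, control_kbps, gap=0.3, cv_max=0.1):
    m_labeled, m_control = mean(labeled_kbps), mean(control_kbps)
    relative_gap = (m_control - m_labeled) / m_control
    coeff_var = pstdev(labeled_kbps) / m_labeled  # flat rate => low variation
    return relative_gap >= gap and coeff_var <= cv_max

labeled = [1500, 1520, 1490, 1510]   # pinned near 1.5 Mbps
control = [4000, 3500, 4200, 3800]   # same payload, obfuscated
print(looks_throttled(labeled, control))  # True
```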

       

  • Residential Links Under the Weather

    Ramakrishna Padmanabhan (CAIDA, UCSD), Aaron Schulman (UCSD), Dave Levin, Neil Spring (University of Maryland)

    • Abstract: Weather is a leading threat to the stability of our vital infrastructure. Last-mile Internet is no exception. Yet, unlike other vital infrastructure, weather's effect on last-mile Internet outages is not well understood. This work is the first attempt to quantify the effect of weather on residential outages. Investigating outages in residential networks due to weather is challenging because residential Internet is heterogeneous: there are different media types, different protocols, and different providers, in varying contexts of different local climate and geography. Sensitivity to these different factors leads to narrow categories when estimating how weather affects these different links. To address these issues we perform a large-scale study looking at eight years of active outage measurements that were collected across the bulk of the last mile Internet infrastructure in the United States.

       

  • 6:30 pm - 9:00 pm Student Dinner (by invitation only)

    Location: TBD

  • Wednesday, August 21, 2019

  • 8:00 am - 5:00 pm Registration

  • 9:00 am - 10:15 am 50th-anniversary Panel

  • 10:15 am - 10:45 am Tea/Coffee Break

  • 10:45 am - 12:00 pm Technical Session 4: Building on New Physical Layers

  • A Link Layer Protocol for Quantum Networks

    Axel Dahlberg, Matthew Skrzypczyk, Tim Coopmans, Leon Wubben, Filip Rozpedek, Matteo Pompili, Arian Stolk, Przemysław Pawełczak (QuTech, TU Delft), Rob Knegjens, Julio de Oliveira Filho (QuTech, TNO), Ronald Hanson, Stephanie Wehner (QuTech, TU Delft)

    • Abstract: Quantum communication brings radically new capabilities that are provably impossible to attain in any classical network. Here, we take the first step from a physics experiment to a fully fledged quantum internet system. We propose a functional allocation of a quantum network stack and construct the first physical and link layer protocols that turn ad-hoc physics experiments producing heralded entanglement between quantum processors into a well-defined and robust service. This lays the groundwork for designing and implementing scalable control and application protocols in platform-independent software. To design our protocol, we identify use cases, as well as fundamental and technological design considerations of quantum network hardware, illustrated by considering the state-of-the-art quantum processor platform available to us (Nitrogen-Vacancy (NV) centers in diamond). Using a purpose built discrete event simulator for quantum networks, we examine the robustness and performance of our protocol using extensive simulations on a supercomputing cluster. We perform a full implementation of our protocol, where we successfully validate the physical simulation model against data gathered from the NV hardware. We first observe that our protocol is robust even in a regime of exaggerated losses of classical control messages with only little impact on the performance of the system. We proceed to study the performance of our protocols for 169 distinct simulation scenarios, including tradeoffs between traditional performance metrics such as throughput and the quality of entanglement. Finally, we initiate the study of quantum network scheduling strategies to optimize protocol performance for different use cases.

       

  • A Millimeter Wave Network for Billions of Things

    Mohammad Hossein Mazaheri, Soroush Ameli, Ali Abedi, Omid Abari (University of Waterloo)

    • Abstract: With the advent of the Internet of Things (IoT), billions of new connected devices will come online, placing a huge strain on today's WiFi and cellular spectrum. This problem will be further exacerbated by the fact that many of these IoT devices are low-power devices that use low-rate modulation schemes and therefore do not use the spectrum efficiently. Millimeter wave (mmWave) technology promises to revolutionize wireless networks and solve the spectrum shortage problem through the usage of massive chunks of high-frequency spectrum. However, adapting this technology presents challenges. Past work has addressed challenges in using mmWave for emerging applications, such as 5G, virtual reality and data centers, which require multiple-gigabits-per-second links, while having substantial energy and computing power. In contrast, this paper focuses on designing a mmWave network for low-power, low-cost IoT devices. We address the key challenges that prevent existing mmWave technology from being used for such IoT devices. First, current mmWave radios are power hungry and expensive. Second, mmWave radios use highly directional antennas to search for the best beam alignment. Existing beam searching techniques are complex and require feedback from the access point (AP), which makes them unsuitable for low-power, low-cost IoT devices. We present mmX, a novel mmWave network that addresses existing challenges in exploiting mmWave for IoT devices. We implemented mmX and evaluated it empirically.

       

  • Underwater Backscatter Networking

    JunSu Jang, Fadel Adib (MIT)

    • Abstract: We present Piezo-Acoustic Backscatter (PAB), the first technology that enables backscatter networking in underwater environments. PAB relies on the piezoelectric effect to enable underwater communication and sensing at near-zero power. Its architecture is inspired by radio backscatter which works well in air but cannot work well underwater due to the exponential attenuation of radio signals in water. PAB nodes harvest energy from underwater acoustic signals using piezoelectric interfaces and communicate by modulating the piezoelectric impedance. Our design introduces innovations that enable concurrent multiple access through circuit-based frequency tuning of backscatter modulation and a MAC that exploits the properties of PAB nodes to deliver higher network throughput and decode network collisions. We built a prototype of our design using custom-designed, mechanically fabricated transducers and an end-to-end battery-free hardware implementation. We tested our nodes in large experimental water tanks at the MIT Sea Grant. Our results demonstrate single-link throughputs up to 3 kbps and power-up ranges up to 10 m. Finally, we show how our design can be used to measure acidity, temperature, and pressure. Looking ahead, the system can be used in ocean exploration, marine life sensing, and underwater climate change monitoring.

       

  • 12:00 pm - 1:30 pm Lunch

    Location: Valley Wing Ballroom A+B

  • 12:00 pm - 1:15 pm Topic Preview 2

  • 1:30 pm - 2:45 pm Technical Session 5: Formal Network Analysis

  • Validating Datacenters at Scale

    Karthick Jayaraman (Microsoft), Nikolaj Bjorner (Microsoft Research), Jitu Padhye, Amar Agrawal, Ashish Bhargava, Paul-Andre C Bissonnette, Shane Foster, Andrew Helwer, Mark Kasten, Ivan Lee, Anup Namdhari, Haseeb Niaz, Aniruddha Parkhi, Hanukumar Pinnamraju, Adrian Power, Neha Milind Raje, Parag Sharma (Microsoft)

    • Abstract: We describe our experiences using formal methods and automated theorem proving for network operation at scale. The experiences are based on developing and applying the SecGuru and RCDC (Reality Checker for Data-Centers) tools in Azure. SecGuru has been used since 2013 and is thus arguably a pioneering industrial deployment of network verification. SecGuru is used for validating ACLs and, more recently, RCDC checks forwarding tables at Azure scale. A central technical angle is that we use local contracts and local checks that can be performed at scale, in parallel, and without maintaining global snapshots, to validate global properties of data-center networks. Specifications leverage declarative encodings of configurations and automated theorem proving for validation. We describe how intent is automatically derived from network architectures and verification is incorporated as prechecks for making changes, live monitoring, and for evolving legacy policies. We document how network verification, grounded in architectural constraints, can be integral to operating a reliable cloud at scale.
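The local-contract idea admits a small sketch. The contract encoding below (a per-switch map from destination prefix to allowed next hops) is an assumption made for illustration, not RCDC's actual specification language; the point is that each switch can be checked in isolation, and a global property follows when every local check passes.

```python
# Hypothetical per-switch local check: every prefix the switch's contract
# requires must be present in its forwarding table (FIB) and point to a
# permitted next hop. Checks like this can run in parallel per switch,
# with no global snapshot.

def check_switch(contract, fib):
    """contract: {prefix: set of allowed next hops}; fib: {prefix: next_hop}.
    Returns a list of (prefix, reason) violations; empty means the check passes."""
    violations = []
    for prefix, allowed in contract.items():
        nh = fib.get(prefix)
        if nh is None:
            violations.append((prefix, "missing route"))
        elif nh not in allowed:
            violations.append((prefix, f"bad next hop {nh}"))
    return violations

contract = {"10.0.0.0/24": {"t1", "t2"}, "10.0.1.0/24": {"t1"}}
fib = {"10.0.0.0/24": "t1", "10.0.1.0/24": "t3"}
print(check_switch(contract, fib))  # [('10.0.1.0/24', 'bad next hop t3')]
```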

       

  • Safely and Automatically Updating In-Network ACL Configurations with Intent Language

    Bingchuan Tian (Nanjing University), Xinyi Zhang (University of California Santa Barbara), Ennan Zhai, Hongqiang Harry Liu, Qiaobo Ye, Chunsheng Wang, Xin Wu, Zhiming Ji, Yihong Sang, Ming Zhang (Alibaba Group), Da Yu (Brown University), Chen Tian (Nanjing University), Haitao Zheng, Ben Y. Zhao (University of Chicago)

    • Abstract: In-network Access Control List (ACL) is an important technique in ensuring network-wide connectivity and security. As cloud-scale WANs today constantly evolve in size and complexity, in-network ACL rules are becoming increasingly more complex. This presents a great challenge to the updating process of ACL configurations: network operators are frequently required to update "tangled" ACL rules across thousands of devices to meet diverse business requirements, and even a single ACL misconfiguration may lead to network disruptions. Such increasing challenges call for an automated system to improve the efficiency and correctness of ACL updates. This paper presents Jinjing, a system that aids Alibaba’s network operators in automatically and correctly updating ACL configurations in Alibaba’s global WAN. Jinjing allows the operators to express in a declarative language, named LAI, their update intent (e.g., ACL migration and traffic control). Then, Jinjing automatically synthesizes ACL update plans that satisfy their intent. At the heart of Jinjing, we develop a set of novel verification and synthesis techniques to rigorously guarantee the correctness of update plans. In Alibaba, our operators have used Jinjing to efficiently update their ACLs and have thus prevented significant service downtime.

       

  • Formal Specification and Testing of QUIC

    Kenneth L. McMillan (Microsoft Research), Lenore D. Zuck (UIC)

    • Abstract: QUIC is a new Internet secure transport protocol currently in the process of IETF standardization. It is intended as a replacement for the TLS/TCP stack and will be the basis of HTTP/3, the next official version of the hypertext transfer protocol. As a result, it is likely in the near future to carry a substantial fraction of traffic on the Internet. In this case study, we describe our experience applying a methodology of compositional specification-based testing to QUIC. We develop a formal specification of the wire protocol, and use this specification to generate automated randomized testers for implementations of QUIC. The testers effectively take one role of the QUIC protocol, interacting with the other role to generate full protocol executions, and verifying that the implementations conform to the formal specification. This form of testing generates significantly more diverse stimuli and a stronger correctness criterion than interoperability testing, the primary method used to date to validate QUIC and its implementations. As a result, numerous implementation errors have been found. These include some vulnerabilities at the protocol and implementation levels, including an off-path denial of service scenario and an information leak similar to the "heartbleed" vulnerability in OpenSSL.

       

  • 2:45 pm - 3:15 pm Tea/Coffee Break

  • 3:15 pm - 4:55 pm Poster, Demo & CCR papers

  • 4:55 pm - 6:00 pm Business meeting

  • 6:30 pm - 9:00 pm Banquet

    Location: Royal Cuisine Museum

  • Thursday, August 22, 2019

  • 8:00 am - 5:00 pm Registration

  • 9:00 am - 10:15 am Technical Session 6: New Ways to Solve Old Problems

  • Leveraging Quantum Annealing for Large MIMO Processing in Centralized Radio Access Networks

    Minsung Kim (Princeton University), Davide Venturelli (USRA Research Institute for Advanced Computer Science), Kyle Jamieson (Princeton University)

    • Abstract: User demand for increasing amounts of wireless capacity continues to outpace supply, and so to meet this demand, significant progress has been made in new MIMO wireless physical layer techniques. Higher-performance systems now remain impractical largely only because their algorithms are extremely computationally demanding. For optimal performance, an amount of computation that increases at an exponential rate both with the number of users and with the data rate of each user is often required. The base station's computational capacity is thus becoming one of the key limiting factors on wireless capacity. QuAMax is the first large MIMO cloud-based radio access network design to address this issue by leveraging quantum annealing on the problem. We have implemented QuAMax on the 2,031-qubit D-Wave 2000Q quantum annealer, the state of the art in the field. We evaluate that implementation on real and synthetic MIMO channel traces, showing that 30 μs of compute time on the 2000Q can enable 48-user, 48-AP-antenna BPSK communication at 20 dB SNR with a bit error rate of 10⁻⁶ and a 1,500-byte frame error rate of 10⁻⁴.

       

  • Neural Packet Classification

    Eric Liang (UC Berkeley), Hang Zhu, Xin Jin (Johns Hopkins University), Ion Stoica (UC Berkeley)

    • Abstract: Packet classification is a fundamental problem in computer networking. This problem exposes a hard tradeoff between the computation and state complexity, which makes it particularly challenging. To navigate this tradeoff, existing solutions rely on complex hand-tuned heuristics, which are brittle and hard to optimize. In this paper, we propose a deep reinforcement learning (RL) approach to solve the packet classification problem. There are several characteristics that make this problem a good fit for Deep RL. First, many of the existing solutions iteratively build a decision tree by splitting nodes in the tree. Second, the effects of these actions (e.g., splitting nodes) can only be evaluated once we are done with building the tree. These two characteristics are naturally captured by the ability of RL to take actions that have sparse and delayed rewards. Third, it is computationally efficient to generate data traces and evaluate decision trees, which alleviates the notoriously high sample complexity problem of Deep RL algorithms. Our solution, NeuroCuts, uses succinct representations to encode state and action space, and efficiently explores candidate decision trees to optimize for a global objective. It produces compact decision trees optimized for a specific set of rules and a given performance metric, such as classification time, memory footprint, or a combination of the two. Evaluation on ClassBench shows that NeuroCuts outperforms existing hand-crafted algorithms in classification time by 18% at the median, and reduces both time and memory footprint by up to 3×.
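The node-splitting action the RL agent chooses at each step can be sketched on a single field. This illustrates decision-tree packet classification generally (rules replicated into every child range they span), not NeuroCuts' state encoding or reward; the rule set is invented.

```python
# One tree-building action: split a node covering [lo, hi] on a single field
# into `cuts` equal-width children. A rule lands in every child whose range
# it overlaps; rule replication is what makes the cut choice consequential.

def split_node(rules, lo, hi, cuts):
    """rules: list of (lo, hi) inclusive ranges on one field.
    Returns the list of per-child rule lists."""
    width = (hi - lo + 1) // cuts
    children = []
    for i in range(cuts):
        c_lo, c_hi = lo + i * width, lo + (i + 1) * width - 1
        children.append([r for r in rules if r[0] <= c_hi and r[1] >= c_lo])
    return children

rules = [(0, 63), (0, 255), (128, 200)]
kids = split_node(rules, 0, 255, 4)     # cut an 8-bit field into 4 ranges
print([len(k) for k in kids])  # [2, 1, 2, 2]
```

An RL agent scores sequences of such actions by the finished tree's depth and memory footprint, which is exactly the sparse, delayed reward the abstract describes.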

       

  • Learning Scheduling Algorithms for Data Processing Clusters

    Hongzi Mao, Malte Schwarzkopf, Shaileshh Bojja Venkatakrishnan (MIT CSAIL), Zili Meng (Tsinghua University), Mohammad Alizadeh (MIT CSAIL)

    • Abstract: Efficiently scheduling data processing jobs on distributed compute clusters requires complex algorithms. Current systems use simple, generalized heuristics and ignore workload characteristics, since developing and tuning a scheduling policy for each workload is infeasible. In this paper, we show that modern machine learning techniques can generate highly-efficient policies automatically. Decima uses reinforcement learning (RL) and neural networks to learn workload-specific scheduling algorithms without any human instruction beyond a high-level objective, such as minimizing average job completion time. However, off-the-shelf RL techniques cannot handle the complexity and scale of the scheduling problem. To build Decima, we had to develop new representations for jobs' dependency graphs, design scalable RL models, and invent RL training methods for dealing with continuous stochastic job arrivals. Our prototype integration with Spark on a 25-node cluster shows that Decima improves average job completion time by at least 21% over hand-tuned scheduling heuristics, achieving up to 2x improvement during periods of high cluster load.


  • 10:15 am - 10:45 am Tea/Coffee Break

  • 10:45 am - 12:00 pm Technical Session 7: Applications

  • E2E: Embracing User Heterogeneity to Improve Quality of Experience on the Web

    Xu Zhang (The University of Chicago), Siddhartha Sen (Microsoft Research), Daniar Kurniawan, Haryadi Gunawi, Junchen Jiang (The University of Chicago)

    • Abstract: Conventional wisdom states that to improve quality of experience (QoE), web service providers should reduce the median or other percentiles of server-side delays. This work shows that doing so can be inefficient due to user heterogeneity in how the delays impact QoE. From the perspective of QoE, the sensitivity of a request to delays can vary greatly even among identical requests arriving at the service, because they differ in the wide-area network latency experienced prior to arriving at the service. In other words, saving 50ms of server-side delay affects different users differently. This paper presents E2E, the first resource allocation system that embraces user heterogeneity to allocate server-side resources in a QoE-aware manner. Exploiting this heterogeneity faces a unique challenge: unlike other application-level properties of a web request (e.g., a user's subscription type), the QoE sensitivity of a request to server-side delays cannot be pre-determined, as it depends on the delays themselves, which are determined by the resource allocation decisions and the incoming requests. This circular dependence makes the problem computationally difficult. We make three contributions: (1) a case for exploiting user heterogeneity to improve QoE, based on end-to-end traces from Microsoft's cloud-scale production web framework, as well as a user study on Amazon MTurk; (2) a novel resource allocation policy that addresses the circular dependence mentioned above; and (3) an efficient system implementation with almost negligible overhead. We applied E2E to two open-source systems: replica selection in Cassandra and message scheduling in RabbitMQ. Using traces and our testbed deployments, we show that E2E can increase QoE (e.g., duration of user engagement) by 28%, or serve 40% more concurrent requests without any drop in QoE.


  • Graphene: Efficient Interactive Set Reconciliation Applied to Blockchain Propagation

    A. Pinar Ozisik, Brian N. Levine, George Bissias (University of Massachusetts Amherst), Gavin Andresen, Darren Tapp (Dash.org), Sunny Katkuri (University of Massachusetts Amherst)

    • Abstract: We introduce Graphene, a method and protocol for interactive set reconciliation among peers in blockchains and related distributed systems. Through the novel combination of a Bloom filter and an Invertible Bloom Lookup Table (IBLT), Graphene uses a fraction of the network bandwidth used by deployed work for one- and two-way synchronization. We show that, for this specific problem, Graphene is more efficient at reconciling n items than using a Bloom filter at the information theoretic bound. We contribute a fast and implementation-independent algorithm for parameterizing an IBLT so that it is optimally small in size and meets a desired decode rate with arbitrarily high probability. We characterize our performance improvements through analysis, detailed simulation, and deployment results for Bitcoin Cash, a prominent cryptocurrency. Our implementations of Graphene, IBLTs, and our IBLT optimization algorithm are all open-source code.
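
The Bloom filter plus IBLT combination can be made concrete with a minimal software model of IBLT peeling. The hash functions, table size, and checksum below are toy placeholders; the paper's contribution is precisely how to parameterize a real IBLT, which this sketch omits.

```python
class IBLT:
    """Toy Invertible Bloom Lookup Table over integer keys with three
    fixed toy hash functions; illustrative only."""

    def __init__(self, m=11):
        self.m = m
        self.count = [0] * m
        self.key_xor = [0] * m
        self.chk_xor = [0] * m

    @staticmethod
    def _chk(key):                 # toy checksum for purity tests
        return key * key + 7

    def _cells(self, key):         # three toy hash functions
        return [key % self.m, (7 * key + 1) % self.m,
                (13 * key + 5) % self.m]

    def _apply(self, key, sign):
        for i in self._cells(key):
            self.count[i] += sign
            self.key_xor[i] ^= key
            self.chk_xor[i] ^= self._chk(key)

    def insert(self, key):
        self._apply(key, +1)

    def delete(self, key):
        self._apply(key, -1)

    def decode(self):
        """Peel 'pure' cells (count +/-1, checksum matches) to recover
        the difference between inserted and deleted key sets."""
        added, removed = set(), set()
        progress = True
        while progress:
            progress = False
            for i in range(self.m):
                if (self.count[i] in (1, -1)
                        and self.chk_xor[i] == self._chk(self.key_xor[i])):
                    key = self.key_xor[i]
                    if self.count[i] == 1:
                        added.add(key)
                        self.delete(key)   # peel it out
                    else:
                        removed.add(key)
                        self.insert(key)   # cancel the deletion
                    progress = True
        return added, removed

t = IBLT()
for k in (1, 2, 3, 4):
    t.insert(k)        # e.g., transactions in my block
for k in (3, 4, 5):
    t.delete(k)        # transactions the peer already has
added, removed = t.decode()
assert (added, removed) == ({1, 2}, {5})   # symmetric difference
```

In Graphene, the Bloom filter first shrinks the candidate set, so an IBLT like this only has to absorb the few remaining differences, which is what keeps it small.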


  • Offloading Distributed Applications onto SmartNICs using iPipe

    Ming Liu, Tianyi Cui, Henry N. Schuh, Arvind Krishnamurthy (University of Washington), Simon Peter (The University of Texas at Austin), Karan Gupta (Nutanix)

    • Abstract: Emerging Multicore SoC SmartNICs, enclosing rich computing resources (e.g., a multicore processor, onboard DRAM, accelerators, programmable DMA engines), hold the potential to offload generic datacenter server tasks. However, it is unclear how to use a SmartNIC efficiently and maximize the offloading benefits, especially for distributed applications. Towards this end, we characterize four commodity SmartNICs and summarize the offloading performance implications from four perspectives: traffic control, computing capability, onboard memory, and host communication. Based on our characterization, we build iPipe, an actor-based framework for offloading distributed applications onto SmartNICs. At the core of iPipe is a hybrid scheduler, combining FCFS and DRR-based processor sharing, which can tolerate tasks with variable execution costs and maximize NIC compute utilization. Using iPipe, we build a real-time data analytics engine, a distributed transaction system, and a replicated key-value store, and evaluate them on commodity SmartNICs. Our evaluations show that when processing 10/25Gbps of application bandwidth, NIC-side offloading can save up to 3.1/2.2 beefy Intel cores and lower application latencies by 23.0/28.0 μs.


  • 12:00 pm - 1:30 pm Lunch

    Location: Valley Wing Ballroom A+B, Valley Wing Ballroom

  • 1:30 pm - 2:45 pm Technical Session 8: NICs and Switches

  • NitroSketch: Robust and General Sketch-based Monitoring in Software Switches

    Zaoxing Liu (Carnegie Mellon University), Ran Ben Basat (Harvard University), Gil Einziger (Ben-Gurion University of the Negev), Yaron Kassner (Technion), Vladimir Braverman (Johns Hopkins University), Roy Friedman (Technion), Vyas Sekar (Carnegie Mellon University)

    • Abstract: Software switches are emerging as a vital measurement vantage point in many networked systems. Sketching algorithms, or sketches, provide high-fidelity approximate measurements, and appear as a promising alternative to traditional approaches such as packet sampling. However, sketches incur significant computation overhead in software switches. Existing efforts in implementing sketches in virtual switches make sacrifices on one or more of the following dimensions: performance (handling 40 Gbps line-rate packet throughput with low CPU footprint), robustness (accuracy guarantees across diverse workloads), and generality (supporting various measurement tasks). In this work, we present the design and implementation of NitroSketch, a sketching framework that systematically addresses the performance bottlenecks of sketches without sacrificing robustness and generality. Our key contribution is the careful synthesis of rigorous, yet practical solutions to reduce the number of per-packet CPU and memory operations. We implement NitroSketch on three popular software platforms (Open vSwitch-DPDK, FD.io-VPP, and BESS) and evaluate the performance. We show that accuracy is comparable to unmodified sketches while attaining up to two orders of magnitude speedup, and up to 45% reduction in CPU usage.


  • PicNIC: Predictable Virtualized NIC

    Praveen Kumar (Cornell University), Nandita Dukkipati, Nathan Lewis, Yi Cui, Yaogong Wang, Chonggang Li, Valas Valancius, Jake Adriaens, Steve Gribble (Google), Nate Foster (Cornell University), Amin Vahdat (Google)

    • Abstract: Network virtualization stacks are the linchpins of public clouds. A key goal is to provide performance isolation so that workloads on one Virtual Machine (VM) do not adversely impact the network experience of another VM. Using data from a major public cloud provider, we systematically characterize how performance isolation can break in current virtualization stacks and find a fundamental tradeoff between isolation and resource multiplexing for efficiency. In order to provide predictable performance, we propose a new system called PicNIC that shares resources efficiently in the common case while rapidly reacting to ensure isolation. PicNIC builds on three constructs to quickly detect isolation breakdown and to enforce it when necessary: CPU-fair weighted fair queues at receivers, receiver-driven congestion control for backpressure, and sender-side admission control with shaping. Based on an extensive evaluation, we show that this combination ensures isolation for VMs at sub-millisecond timescales with negligible overhead.
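
Of the three constructs, the receiver-side fair queue is the most classical; a textbook weighted-fair-queueing core with virtual finish times looks roughly like this (a simplified stand-in that assumes all flows stay backlogged, not PicNIC's CPU-fair implementation):

```python
import heapq

class WFQ:
    """Minimal weighted fair queue: serve packets in order of virtual
    finish time, so a flow's share is proportional to its weight.
    Omits the system virtual clock needed once queues go idle."""
    def __init__(self):
        self.last_finish = {}   # per-flow virtual finish time
        self.heap = []
        self.seq = 0            # FIFO tie-break for equal finish times

    def enqueue(self, flow, size, weight):
        finish = self.last_finish.get(flow, 0.0) + size / weight
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow))
        self.seq += 1

    def dequeue(self):
        return heapq.heappop(self.heap)[2]

q = WFQ()
for _ in range(4): q.enqueue("A", size=1, weight=2)  # weight-2 VM
for _ in range(2): q.enqueue("B", size=1, weight=1)  # weight-1 VM
order = [q.dequeue() for _ in range(6)]
assert order == ["A", "A", "B", "A", "A", "B"]  # 2:1 service ratio
```
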


  • Fast, Scalable, and Programmable Packet Scheduler in Hardware

    Vishal Shrivastav (Cornell University)

    • Abstract: With increasing link speeds and slowdown in the scaling of CPU speeds, packet scheduling in software is resulting in lower precision and higher CPU utilization. By offloading packet scheduling to hardware such as a NIC, one can potentially overcome these drawbacks. However, to retain the flexibility of software packet schedulers, a packet scheduler in hardware must be programmable, while also being fast and scalable. State-of-the-art packet schedulers in hardware either compromise on scalability (Push-In-First-Out (PIFO)) or the ability to express a wide range of packet scheduling algorithms (First-In-First-Out (FIFO)). Further, even a general scheduling primitive like PIFO is not expressive enough to express certain key classes of packet scheduling algorithms. Hence, in this paper, we propose a generalization of the PIFO primitive, called Push-In-Extract-Out (PIEO), which, like PIFO, maintains an ordered list of elements, but unlike PIFO, which only allows dequeue from the head of the list, PIEO allows dequeue from arbitrary positions in the list by supporting programmable predicate-based filtering at dequeue. Next, we present a fast and scalable hardware design of the PIEO scheduler and prototype it on an FPGA. Overall, the PIEO scheduler is both more expressive and over 30x more scalable than PIFO.
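
The PIFO-versus-PIEO distinction is easy to see in a software model (a behavioral sketch only; the paper's contribution is making this primitive fast and scalable in hardware):

```python
import bisect

class PIEO:
    """Behavioral model of Push-In-Extract-Out: an ordered list with
    rank-ordered insertion and predicate-filtered dequeue."""
    def __init__(self):
        self._items = []   # kept sorted by rank: (rank, value)

    def enqueue(self, rank, value):
        bisect.insort(self._items, (rank, value))   # Push-In

    def dequeue(self, predicate=lambda v: True):
        # Extract-Out: smallest-rank element satisfying the predicate.
        # PIFO is the special case where the predicate is always true.
        for i, (rank, value) in enumerate(self._items):
            if predicate(value):
                del self._items[i]
                return value
        return None

q = PIEO()
q.enqueue(3, "c"); q.enqueue(1, "a"); q.enqueue(2, "b")
head = q.dequeue()                        # PIFO-style head dequeue
eligible = q.dequeue(lambda v: v == "c")  # predicate skips "b"
assert (head, eligible) == ("a", "c")
```

A dequeue predicate such as "this packet's transmission time has arrived" is what lets PIEO express schedulers (e.g., non-work-conserving ones) that a head-only dequeue cannot.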


  • 2:45 pm - 3:15 pm Tea/Coffee Break

  • 3:15 pm - 4:30 pm Technical Session 9: Video

  • Vantage: Optimizing video upload for time-shifted viewing of social live streams

    Devdeep Ray, Jack Kosaian, K. V. Rashmi, Srinivasan Seshan (Carnegie Mellon University)

    • Abstract: Social live video streaming (SLVS) applications are becoming increasingly popular with the rise of platforms such as Facebook Live, YouTube Live, Twitch and Periscope. A key characteristic that differentiates this new class of applications from traditional live streaming is that these live streams are watched by viewers at different delays; while some viewers watch a live stream in real-time, others view the content in a time-shifted manner at different delays. In the presence of variability in the upload bandwidth, which is typical in mobile environments, existing solutions silo viewers into either receiving low latency video at a lower quality or a higher quality video with a significant delay penalty, without accounting for the presence of diverse time-shifted viewers. In this paper, we present Vantage, a live-streaming upload solution that improves the overall quality of experience for diverse time-shifted viewers by using selective quality-enhancing retransmissions in addition to real-time frames, optimizing the encoding schedules to balance the allocation of the available bandwidth between the two. Our evaluation using real-world mobile network traces shows that Vantage can provide high quality simultaneously for both low-latency and delayed viewing. For delayed viewing, Vantage achieves an average improvement of 19.9% over real-time optimized video streaming techniques across all the network traces and test videos, with observed gains of up to 42.9%. These benefits come at the cost of an average drop in real-time quality of 3.3%, with a maximum drop of 7.1%. This represents a significant performance improvement over current techniques used for SLVS applications, which primarily optimize the video upload for real-time viewing.


  • Pano: Optimizing 360° Video Streaming with a Better Understanding of Quality Perception

    Yu Guan, Chengyuan Zheng, Xinggong Zhang, Zongming Guo (Peking University), Junchen Jiang (The University of Chicago)

    • Abstract: 360° video streaming has gained tremendous popularity, but achieving high video quality requires much more bandwidth than non-360° videos. The reason is that current solutions assume users perceive the quality of 360° videos in the same way they perceive that of non-360° videos. However, our analysis reveals several quality-determining factors unique to 360° videos, including the moving speed of the user's viewpoint (the center of the user's field of view), the recent change of video luminance, and the difference in depth-of-fields of visual objects close to the viewpoint. This paper presents Pano, a 360° video streaming system that leverages these 360° video-specific factors. We make three technical contributions. (1) We build a 360° video quality perception model to systematically capture the impact of these 360° video-specific factors. (2) Pano replaces grid-based spatial tiling with a variable-sized tiling scheme, which strikes a better balance between the perceived quality and video encoding efficiency. (3) Pano uses a new quality-adaptation scheme that maximizes the user-perceived quality and is readily deployable in the current video delivery architecture. Our evaluation (based on user study and trace analysis) shows that compared with state-of-the-art techniques, Pano can save 41-46% bandwidth without any drop in the perceived quality, or it can raise the perceived quality (user rating) by 25%-142% without using more bandwidth.


  • End-to-End Transport for Video QoE Fairness

    Vikram Nathan, Vibhaalakshmi Sivaraman, Ravichandra Addanki, Mehrdad Khani, Prateesh Goyal, Mohammad Alizadeh (MIT CSAIL)

    • Abstract: The growth of video traffic makes it increasingly likely that multiple clients share a bottleneck link, giving video content providers an opportunity to optimize the experience of multiple users jointly. But today’s transport protocols are oblivious to video streaming applications and provide only connection-level fairness. We design and build Minerva, the first end-to-end transport protocol for multi-user video streaming. Minerva uses information about the player state and video characteristics to adjust its congestion control behavior to optimize for QoE fairness. Minerva clients receive no explicit information about other video clients, yet when multiple of them share a bottleneck link, their rates converge to a bandwidth allocation that maximizes QoE fairness. At the same time, Minerva videos occupy only their fair share of the bottleneck link bandwidth, competing fairly with existing TCP traffic. We implement Minerva on an industry-standard video player and server and show that, compared to Cubic and BBR, 15-32% of the videos using Minerva experience an improvement in viewing experience equivalent to switching from 720p to 1080p.


  • 4:30 pm - 5:45 pm Technical Session 10: New Control Plane Operations

  • Towards Highly Available Clos-Based WAN Routers

    Sucha Supittayapornpong, Barath Raghavan, Ramesh Govindan (University of Southern California)

    • Abstract: The performance and availability of cloud and content providers often depend on the wide area networks (WANs) they use to interconnect their datacenters. WAN routers, which connect to each other using trunks (bundles of links), are sometimes built using an internal Clos topology connecting merchant-silicon switches. As such, these routers are susceptible to internal link and switch failures, resulting in reduced capacity and low availability. Based on the observation that today's WAN routers use relatively simple trunk wiring and routing techniques, we explore the design of novel wiring and more sophisticated routing techniques to increase failure resilience. Specifically, we describe techniques to 1) optimize trunk wiring to increase effective internal router capacity so as to be resilient to internal failures, 2) compute the effective capacity under different failure patterns, and 3) use these to compute compact routing tables under different failure patterns, since switches have limited routing table sizes. Our evaluations show that our approach can mask failures of up to 75% of switches in some cases without exceeding routing table limits, whereas competing techniques can sometimes lose half of a WAN router's capacity with a single failure.


  • On Optimal Neighbor Discovery

    Philipp H. Kindt, Samarjit Chakraborty (Technical University of Munich (TUM))

    • Abstract: Mobile devices apply neighbor discovery (ND) protocols to wirelessly initiate a first contact within the shortest possible amount of time and with minimal energy consumption. For this purpose, over the last decade, a vast number of ND protocols have been proposed, which have progressively reduced the relation between the time within which discovery is guaranteed and the energy consumption. In spite of the simplicity of the problem statement, even after more than 10 years of research on this specific topic, new solutions are still being proposed today. Despite the large number of known ND protocols, it has remained unclear what the best achievable latency for a given energy budget is. This paper addresses this question and for the first time presents safe and tight, duty-cycle-dependent bounds on the worst-case discovery latency that no ND protocol can beat. Surprisingly, several existing protocols are indeed optimal, which has not been known until now. We conclude that there is no further potential to improve the relation between latency and duty-cycle, but future ND protocols can improve their robustness against beacon collisions.


  • Elmo: Source Routed Multicast for Public Clouds

    Muhammad Shahbaz (Stanford University), Lalith Suresh (VMware), Jennifer Rexford (Princeton University), Nick Feamster (Princeton University), Ori Rottenstreich (Technion), Mukesh Hira (VMware)

    • Abstract: We present Elmo, a system that addresses the multicast scalability problem in multi-tenant data centers. Modern cloud applications frequently exhibit one-to-many communication patterns and, at the same time, require sub-millisecond latencies and high throughput. IP multicast can achieve these requirements but has control- and data-plane scalability limitations that make it challenging to offer it as a service for hundreds of thousands of tenants, typical of cloud environments. Tenants, therefore, must rely on unicast-based approaches (e.g., application-layer or overlay-based) to support multicast in their applications, imposing bandwidth and end-host CPU overheads, with higher and unpredictable latencies. Elmo scales network multicast by taking advantage of emerging programmable switches and the unique characteristics of data-center networks; specifically, the hypervisor switches, symmetric topology, and short paths in a data center. Elmo encodes multicast group information inside packets themselves, reducing the need to store the same information in network switches. In a three-tier data-center topology with 27,000 hosts, Elmo supports a million multicast groups using an average packet-header size of 114 bytes, requiring as few as 1,100 multicast group-table entries on average in leaf switches, and having a traffic overhead as low as 5% over ideal multicast.
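
The header-encoding idea can be sketched as follows (a deliberately simplified model with hypothetical field sizes; Elmo's actual packet-rule encoding, bitmap compression, and default-route fallback are richer):

```python
def encode(route):
    """Encode {switch_id: [output_ports]} as (id, port-bitmap) pairs
    carried in the packet header, so switches need no per-group state.
    One-byte ids and 8-port bitmaps are illustrative only."""
    header = bytearray()
    for sw, ports in sorted(route.items()):
        bitmap = 0
        for p in ports:
            bitmap |= 1 << p
        header += bytes([sw, bitmap])
    return bytes(header)

def forward(header, my_switch_id):
    """A switch scans the header for its own entry and replicates the
    packet to each set port bit; no entry means no multicast copies
    here (a real deployment would fall back to a default route)."""
    for i in range(0, len(header), 2):
        if header[i] == my_switch_id:
            b = header[i + 1]
            return [p for p in range(8) if (b >> p) & 1]
    return []

hdr = encode({1: [0, 3], 4: [2]})      # group spans two switches
assert forward(hdr, 1) == [0, 3]
assert forward(hdr, 4) == [2]
assert forward(hdr, 9) == []           # switch not on the tree
```

Because the group membership travels in the packet, switch group tables only need entries for the trees too large to fit in the header, which is how Elmo keeps table usage low.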


  • 5:45 pm - 6:05 pm Closing

  • 6:30 pm - 9:00 pm N2Women Dinner

    Location: Garden Wing Ballroom I + II

  • Friday, August 23, 2019

  • 8:00 am - 5:00 pm Start Registration

  • 8:30 am - 10:00 am Workshops and Tutorials: Early Morning Session

  • Workshops: NetPL and NetAI

  • Tutorials: P4 and CPS

  • 10:00 am - 10:30 am Tea/Coffee Break

  • 10:30 am - 12:00 pm Workshops and Tutorials: Late Morning Session

  • Workshops: NetPL and NetAI

  • Tutorials: P4 and CPS

  • 12:00 pm - 1:30 pm Lunch

    Location: Valley Wing Ballroom A+B, Valley Wing Ballroom

  • 1:30 pm - 3:00 pm Workshops and Tutorials: Early Afternoon Session

  • Workshops: NetPL and NetAI

  • Tutorial: P4

  • 3:00 pm - 3:30 pm Tea/Coffee Break

  • 3:30 pm - 5:00 pm Workshops and Tutorials: Late Afternoon Session

  • Workshops: NetPL and NetAI

  • Tutorial: P4

The final program may be adjusted.