ACM SIGCOMM 2022, Amsterdam, The Netherlands
The 2nd ACM SIGCOMM Workshop on 5G and Beyond Network Measurements, Modeling, and Use Cases (5G-MeMU)

For online participation, please join this Zoom meeting. The workshop also has an associated Slack channel sigcomm22-5gmemu for discussions.

Workshop Program

  • Monday, August 22, 2022 CEST (Room: Veilingzaal)

  • 09:00 am - 09:50 am CEST      Keynote/Panel

    Keynote: How can we help you? Use cases for relay-based services in 5G

    Mirja Kühlewind (Master Researcher at Ericsson Research, Herzogenrath, Germany and Chair of the Internet Architecture Board (IAB) in the IETF)

    • Abstract: Encryption is on the rise, making the Internet more secure every day. Encryption also reinforces layer boundaries and increases control over data distribution. This shifts the network paradigm from one extreme, where intermediate nodes interfere with protocol mechanisms and payload without consent from the user or application, to another extreme, where the network may not be able to provide any performance-enhancing services at all. However, there is a middle ground when in-network relay services act as an endpoint in a tunnel-based setup. In such a setup, relays can explicitly provide requested services, or both ends can exchange data that can be used to help troubleshooting or increase security. This talk will survey new opportunities for collaboration in 5G networks using a relay-based approach.

  • 09:50 am - 10:30 am CEST      Session I: 5G Performance Analysis in the Wild

    Session chair: Ying Wang (Stevens Institute of Technology)
  • An In-Depth Study of Uplink Performance of 5G mmWave Networks

    Moinak Ghoshal (Northeastern University), Z. Jonny Kong, Qiang Xu (Purdue University), Zixiao Lu, Shivang Aggarwal, Imran Khan (Northeastern University), Yuanjie Li (Tsinghua University), Y. Charlie Hu (Purdue University), Dimitrios Koutsonikolas (Northeastern University)

    • Abstract: The highly anticipated 5G mmWave technology promises to enable many uplink-oriented, latency-critical applications (LCAs) such as Augmented Reality and Connected Autonomous Vehicles. Nonetheless, recent measurement studies have largely focused on its downlink performance. In this work, we perform a systematic study of the uplink performance of commercial 5G mmWave networks across 3 major US cities and 2 mobile operators. Our study makes three contributions. (1) It reveals that 5G mmWave uplink performance is geographically diverse and delivers substantially better bandwidth and latency than LTE, but is often erratic and suboptimal, which can degrade LCA performance. (2) Our analysis of control messages and PHY-level KPIs shows that the root causes for the suboptimal performance are fundamental to 5G mmWave and cannot be easily fixed via simple tuning of network configurations. (3) We identify various design and deployment optimizations that 5G operators can explore to bring 5G mmWave performance to the level needed to ultimately support the LCAs.

  • Implications of Handover Events in Commercial 5G Non-Standalone Deployments in Rome

    Konstantinos Kousias (Simula Research Laboratory), Mohammad Rajiullah (Karlstad University), Giuseppe Caso (Ericsson), Ozgu Alay (University of Oslo, Karlstad University), Anna Brunstrom (Karlstad University), Luca De Nardis (Sapienza University of Rome), Marco Neri (Rohde & Schwarz), Usman Ali, Maria Gabriella Di Benedetto (Sapienza University of Rome)

    • Abstract: Fifth Generation (5G) cellular networks are gaining popularity and are soon expected to become the norm in the global telecommunications industry. In this paper, we follow an experimental approach to study key aspects of Handover (HO) events in commercial 5G Non-Standalone (NSA) deployments. We collect our dataset by conducting a large-scale measurement campaign in the city of Rome, Italy. With our analysis, we aim, first, to empirically evaluate the current HO strategies followed in 5G NSA and, second, to show the impact of HOs on end-user application performance. Our results show an increased number of intra-RAT HOs, i.e., HOs between cells with the same Radio Access Technology (RAT), in situations where signal quality conditions between multiple candidate cells are on par. In addition, we show that inter-RAT HOs, i.e., HOs between cells with different RATs, can be detrimental to application performance.

  • 10:30 am - 11:00 am CEST Break

  • 11:00 am - 12:30 pm CEST      Session II: 5G System Design and Analysis

    Session chair: Ozgu Alay (University of Oslo)
  • Performance of QUIC congestion control algorithms in 5G networks

    Habtegebreil Haile, Karl-Johan Grinnemo (Karlstad University), Simone Ferlin (Ericsson), Per Hurtig, Anna Brunstrom (Karlstad University)

    • Abstract: The deployment of the next generation of cellular networks (5G) is expanding to enable new services and improve the quality of existing ones. Despite the benefits of 5G networks, they also present new challenges for the performance of classical and recently proposed congestion control algorithms (CCAs), e.g., Bottleneck Bandwidth and Round-trip propagation time (BBR) and Copa. Despite their successful adoption on the Internet, BBR has been shown to overestimate the bottleneck bandwidth in cellular networks, and Copa has not been independently tested at a similar scale and level of detail as BBR in cellular networks. In this work, we compare the performance of these fairly recent CCAs, as well as the widely deployed CUBIC CCA and a modification of BBR for cellular networks (RBBR), at 5G rates. The evaluation is performed using the emerging QUIC protocol and uses both emulations and live experiments. Our results show that in 5G networks, CUBIC, BBR, and Copa suffer from significant bufferbloat, longer packet delays, and lower throughput, respectively. We also observe that in cases where the bottleneck is largely in the 5G link, RBBR can offer a significant delay reduction compared to BBR and CUBIC.

  • Slicing Networks with P4 Hardware and Software Targets

    Eric Hauser, Manuel Simon, Henning Stubbe, Sebastian Gallenmüller, Georg Carle (Technical University of Munich)

    • Abstract: Running applications over a shared network may lead to unwanted impairments or performance impacts. To avoid these effects, the partitioning of network resources is an integral aspect of effective 5G networks. These virtually partitioned networks, or slices, allow the provisioning of network resources to guarantee a specific service quality to dedicated virtual networks. Programmable network devices, pushed by new languages such as P4, with their intrinsic flexibility, present themselves as a promising technique to realize slicing. This paper explores three approaches to network slicing and their respective implementations on P4 software and hardware network devices. We focus our effort on investigating P4 primitives that do not require the features of a specific P4 device but are available across different P4 targets. Based on our findings, we provide target-specific guidelines for minimizing the impact of P4-based slicing for software and hardware targets alike.

  • Bandwidth Resource Allocation in Integrated Access and Backhaul Networks

    Dor Harris, Danny Raz, Pe'er Sagiv (Technion)

    • Abstract: One of the promising technologies that allows currently deployed 5G and the anticipated 6G networks to cope with the ever-increasing demand for high-throughput, low-latency data services is Integrated Access and Backhaul (IAB). Self-driving cars, augmented reality games, and large-scale data streaming are simple examples of new applications that require a large amount of low-latency traffic. Integrated Access and Backhaul can provide better service to such applications by extending traditional cellular access and combining access and backhaul resources. However, the actual performance gain depends on the specific allocation of the radio resources. In this paper we address this challenge and study new ways to allocate bandwidth across such networks. We formulate the IAB Resource Allocation Problem (IABrap) and provide a novel approximation algorithm with guaranteed performance to solve it. We also study a new ML method based on applying a Graph Neural Network (GNN) to this problem. We evaluate the expected performance of both methods in realistic scenarios using a self-developed network simulator. Our results indicate that combining traditional algorithmic techniques with state-of-the-art ML can provide better practical algorithms.

  • Prediction and Exposure of Delays from a Base Station Perspective in 5G and Beyond Networks

    Akhila Rao (RISE Research Institutes of Sweden, KTH Royal Institute of Technology), Lorenzo Corneo (Aalto University), Aleksandr Zavodovski (Max Planck Institute for Informatics), William Tärneberg, Emma Fitzgerald (Lund University), Caner Kilinc (Ericsson, University of Edinburgh), Omkar Rai, Sixten Johansson, Viktor Berggren, Hassam Riaz (Ericsson), Andreas Johnsson (Ericsson, Uppsala University)

    • Abstract: The inherent flexibility of 5G networks comes with a high degree of configuration and management complexity. This makes the performance outcome for UEs, more than ever, dependent on intricate configurations and the interplay between algorithms at various network components. In this paper, we take initial steps towards a performance exposure system at the base station, using a data-driven approach for predicting performance violations in terms of RTT, as observed by the UE, in a 5G mmWave network. We present ML models that predict RTT using low-level and high-frequency base station metrics from a 5G mmWave testbed based on commercially available equipment. Predicting UE performance from a base station perspective, and exposing this knowledge, is valuable for applications to proactively address performance violations. We also compare several methods for feature reduction, which have a significant impact on monitoring load. We demonstrate our model's ability to identify RTT violations, paving the way for network providers towards an intelligent performance exposure system.

  • 5GPerf: Profiling Open Source 5G RAN Components Under Different Architectural Deployments

    Cuidi Wei (The George Washington University), Ahan Kak (Nokia Bell Labs), Nakjung Choi (Nokia Bell Labs), Timothy Wood (The George Washington University)

    • Abstract: The 5G Radio Access Network (RAN) is composed of complicated software components that handle packet processing from the radio layer up. Recently, several open source platforms for 5G RAN development and prototyping have emerged. In this paper we conduct performance analyses and CPU profiling to understand the cost breakdown for different 5G RAN components and the impact of different deployment configurations. Our results provide insight into the overheads introduced by each protocol layer and deployment architecture, and can help guide future research on 5G base station design and acceleration.

Call for Papers

Commercial 5G networks are being rolled out quickly worldwide. In theory, millimeter wave (mmWave) 5G can support network throughput of up to 20 Gbps, a 100× improvement compared to today’s 4G LTE. 5G and beyond networks enable the development of new services that require ultra-high bandwidth and/or low latency. Examples include edge-assisted machine learning, networked virtual reality and augmented reality, collaborative and autonomous vehicles, low-latency Internet of Things (IoT) applications, and data-intensive sensing, to name a few. Despite the potential of 5G and beyond technologies, the validation of 5G performance in operational systems and a complete understanding of the impact of such technologies on various vertical use cases remain largely open. The research community also faces several major challenges in conducting research on 5G and beyond networks, and in leveraging 5G’s infrastructure to support the development and deployment of research prototypes.

The key issues include heterogeneity in both 5G and beyond technologies and service requirements, the inaccessibility and closedness of current commercial 5G networks, and a lack of software infrastructure such as tools and models that facilitate 5G and beyond enabled research, system prototyping, and experimentation.

(1) 5G technologies and performance are heterogeneous. Unlike its 4G predecessors, 5G encompasses more diverse technologies, such as mmWave high-band vs. mid-band vs. sub-6 GHz low-band radio spectrum. This poses challenges in designing systems and services that can quickly adapt to changing 5G performance. The needs and requirements posed by various 5G and beyond use cases are also highly heterogeneous. This poses challenges for 5G deployments to meet and validate diverse service requirements end-to-end.

(2) 5G’s deployment takes time. Right now, mmWave 5G is only available in a few major cities. Supporting infrastructure such as 5G edge computing platforms has seen even less deployment. This hinders researchers, in particular those in less populated areas, from accessing 5G. Furthermore, the commercial 5G ecosystem is closed. This makes it difficult to access many types of information, such as cellular control-plane messages, device radio energy consumption, and base station resource allocation status.

(3) There is still a gap between the high demand for and current development of open programmable 5G and beyond software stacks that enable prototyping and experimentation. Considering the current experiences and expectations from various vertical domains, the software stacks that support building 5G research prototypes, as well as the traffic models and tools that capture verticals’ performance, are insufficient.

The 5G-MeMU workshop is motivated by the above key challenges of understanding and optimizing operational 5G and beyond systems and services and of conducting 5G-related research. Its goal is to bring together researchers, cellular network operators, equipment vendors, mobile device manufacturers, vertical use case owners, and policymakers from academia, industry, and government to discuss the challenges of the 5G ecosystem, centered on practical experiences with 5G and beyond systems and services and on state-of-the-art research on 5G and beyond. This workshop solicits novel contributions to the state of the art, results of ongoing research, open issues, trends, and new ideas.

Topics of Interest

Topics of interest include, but are not limited to:

  • Application-Level Performance Evaluation of 5G Networks and 5G slices
  • End-to-End Measurement and Validation Frameworks for 5G KPIs
  • Crowd-Sourced Data Collection Services for Commercial 5G
  • Deployment of Massive Scale IoT Systems on 5G and beyond
  • Energy Consumption Analysis and Modeling of 5G and beyond Radio
  • Experiences on Designing, Implementing, and Deploying 5G and beyond Testbeds
  • Exploration of Open Architectures for 5G and beyond RAN and Core Networks
  • Machine Learning for Analyzing and Predicting 5G Traffic
  • Measurement-Driven and ML/AI Approaches for 5G and beyond Design and Optimization
  • Metrics, Tools, and Testbeds for 5G and beyond Edge
  • Performance Evaluation and Optimization of 5G and beyond Multi-Access and Multi-Connectivity Solutions
  • Traffic Models, Tools, and Performance Evaluation of Industry Verticals (eHealth, Transportation, Automation, Smart Cities, etc.) in 5G Networks
  • Network Friendly Monitoring of Commercial 5G Networks
  • Novel 5G-Aware Networked Systems and Applications
  • Security Analysis of 5G and beyond Systems and Applications
  • Techniques for Improving Reproducibility in 5G Measurements

Submission Instructions

Submissions must be original, unpublished work, and not under consideration at another conference or journal. Submitted papers must be at most six (6) pages long, excluding references and appendices, in two-column 10pt ACM format. Papers must include author names and affiliations for single-blind peer reviewing by the PC. Authors of accepted submissions are expected to present and discuss their work at the workshop.

Please submit your paper via https://memu-5g22.hotcrp.com.

Important Dates

  • May 25, 2022 (extended from May 11, 2022)

    Submission deadline

  • June 17, 2022

    Acceptance notification

  • July 1, 2022

    Camera-ready deadline

Organizers

  • Steering Committee
  • Anna Brunstrom

    Karlstad University, Sweden

  • Bo Han

    George Mason University, US

  • Program Co-Chairs
  • Ozgu Alay

    University of Oslo, Norway

  • Ying Wang

    Stevens Institute of Technology, US

  • Technical Program Committee
  • Andra Lutu

    Telefonica, Spain

  • Aloizio Pereira da Silva

    Virginia Tech, US

  • Bei Xie

    McNeese State University, US

  • Bo Yu

    George Mason University, US

  • Carla Fabiana Chiasserini

    Politecnico di Torino, Italy

  • Chi-Yu Li

    National Chiao Tung University, Taiwan

  • Daniel Jakubisin

    Virginia Tech, US

  • Georgios Gardikis

    Space Hellas, Cyprus

  • Giuseppe Caso

    Ericsson, Sweden

  • Gyan Ranjan

    Ericsson, US

  • Harilaos Koumaras

    NCSR Demokritos, Greece

  • Hongyi Wu

    Old Dominion University, US

  • Pedro Merino

    University of Malaga, Spain

  • Shucheng Yu

    Stevens Institute of Technology, US

  • Song Liu

    Google, US

  • Valerio Frascolla

    Intel, Germany

  • Victor Lawrence

    Stevens Institute of Technology, US

  • Vincenzo Mancuso

    IMDEA Networks, Spain

  • Xuan Tuyen Tran

    AT&T Labs, US

  • Yang Zhang

    Google, US

  • Yanjun Pan

    University of Arkansas, US

  • Yuanjie Li

    Tsinghua University, China