
ACM SIGCOMM 2021 TUTORIAL: Supporting Replicable Networking Experiments with TriScale

Tutorial Program

The tutorial has an associated Slack channel for discussions. Click on the link below to visit it. If you're asked to sign in, use the workspace name "sigcomm.slack.com" to sign up or sign in.

Go to Tutorial Slack channel
  • Monday, August 23rd, 10:00-13:00 (UTC-4, New York), 16:00-19:00 (UTC+2, Paris)

  • 10:00 - 11:00 Session I

    Introductory lecture

    • Motivation: Why should we care about replicability? Why is it hard to achieve in networking research?
    • Methodological pitfalls: Common mistakes and oversights
    • Overview of TriScale: A concrete instance of a systematic methodology for experiment design and data analysis to support replicability
    • TriScale in action: Highlights from selected networking use cases
  • 11:00 - 11:15 Coffee/tea Break

  • 11:15 - 12:15 Session II

    Hands-on session

    • Design your (next) experiment with TriScale
    • Analysis of publicly available networking data with TriScale
  • 12:15 - 12:30 Coffee/tea Break

  • 12:30 - 13:00 Session III

    Closing

    • Perspectives on replicability in networking in general and for TriScale in particular
    • Open discussion and feedback

Call For Participation

When designing their performance evaluations, networking researchers often face questions such as: How long should a run be? How many runs should be performed? How should the variability across runs be accounted for? Which statistical methods should be used to analyze the data? Despite the best intentions, researchers often answer these questions differently, which impairs the replicability of their evaluations and the confidence in the results.

Improving the standards of replicability in networking has recently gained traction within the community. As an important piece of the puzzle, we have developed a systematic methodology that streamlines the design and analysis of performance evaluations, and we have implemented this methodology in a framework called TriScale.

This tutorial will introduce the main concepts of the methodology and let you experiment with TriScale. By the end of the tutorial, you will be able to:

  • Understand the difference between replicability and reproducibility, and why these notions matter;
  • Understand why performance evaluation experiments must be replicable to be meaningful;
  • Understand the basics of statistics required to assess replicability;
  • Answer questions such as "How many times should I repeat my experiment?" rationally (see the sketch after this list);
  • Use the TriScale framework to help you design your next experiments, analyze your data, and report your results in a (more) replicable fashion.
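
To give a taste of the statistics involved: the probability that all N runs of an experiment fall below the p-th percentile of the underlying distribution is p^N, so as soon as 1 - p^N reaches the desired confidence level, the largest value observed across the runs is a valid one-sided upper bound on that percentile, with no assumption on the distribution. The sketch below illustrates this calculation in plain Python; it mirrors the reasoning TriScale automates for experiment sizing, but the function name min_runs is ours, not part of the TriScale API.

    import math

    def min_runs(percentile: float, confidence: float) -> int:
        # Smallest N such that 1 - percentile**N >= confidence, i.e.,
        # such that the largest of N run results is a one-sided upper
        # bound on the given percentile with the given confidence.
        # Both arguments are probabilities in (0, 1).
        return math.ceil(math.log(1 - confidence) / math.log(percentile))

    print(min_runs(0.50, 0.95))  # median: 5 runs suffice
    print(min_runs(0.95, 0.95))  # 95th percentile: 59 runs

Note how quickly the required number of runs grows with the percentile: this is one reason why a handful of runs rarely justifies claims about tail behavior.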

Expectations and Prerequisites

This tutorial is designed for anyone performing experiments which are not purely deterministic; that is, almost anyone likely to attend SIGCOMM! It is particularly recommended for

  • Early career researchers (MSc, PhD, PostDoc), who, as we all know, are the ones actually designing and running performance evaluations;
  • Supervisors interested in useful resources to train their mentees;
  • Anyone interested in good/better scientific practices.

The TriScale framework is implemented in Python; the essentials will be covered during the tutorial, but some basic knowledge of Python and Jupyter notebooks will be beneficial.
Note that no installation or setup is required: the tutorial runs directly in any web browser.
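
As a small preview of the Python used in the hands-on analysis: the same distribution-free reasoning shown earlier extends from the extremes to any rank. Given one metric value per run, a one-sided upper confidence bound on a percentile is simply an order statistic whose rank is chosen via the binomial distribution (Thompson's method, on which TriScale's analysis builds). The following is a minimal sketch, not TriScale's actual code; the function name and the synthetic data are ours.

    import numpy as np
    from scipy.stats import binom

    def percentile_upper_bound(samples, percentile, confidence):
        # Distribution-free one-sided upper confidence bound on the
        # given percentile: return the k-th smallest sample, where k
        # is the smallest rank such that
        # P(Binomial(n, percentile) <= k - 1) >= confidence.
        x = np.sort(np.asarray(samples))
        n = len(x)
        for k in range(1, n + 1):
            if binom.cdf(k - 1, n, percentile) >= confidence:
                return x[k - 1]
        raise ValueError(f"{n} runs are too few for this percentile/confidence")

    rng = np.random.default_rng(1)
    latencies = rng.lognormal(mean=3.0, sigma=0.3, size=59)  # synthetic per-run metric
    print(percentile_upper_bound(latencies, 0.95, 0.95))

With 59 samples, a 95th percentile, and 95% confidence, the selected rank is the maximum (k = 59), consistent with the sizing calculation above.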

Organizers

  • Romain Jacob

    ETH Zurich

    • Bio:

      I am a postdoctoral researcher at ETH Zurich in the group of Prof. Laurent Vanbever. My current research interests focus on computer networks, communication protocols, (real-time) scheduling theory, and statistics applied to experimental design.
      I started working on TriScale out of my own need to design sound performance evaluations for low-power wireless communication protocols, the topic of my doctoral dissertation. At the time, I was heavily involved in the IoTBench initiative, which aims to design benchmarks for low-power wireless; and as we learned then, there can be no proper benchmarking without replicability! So here we are.


  • Marco Zimmerling

    TU Dresden

    • Bio:

      Marco Zimmerling heads the Networked Embedded Systems Lab at TU Dresden, Germany. He holds a PhD in Computer Engineering from ETH Zurich, Switzerland, and a Diploma in Computer Science (2009) from TU Dresden. While working on his Diploma thesis, he spent seven months in Sweden, collaborating with RISE SICS and Uppsala University. In 2006, he did a six-month internship at the IBM T.J. Watson Research Center, Hawthorne, USA. His research revolves around embedded systems and wireless networking, with the goal of designing dependable and sustainable networked systems that can power emerging cyber-physical systems and the Internet of Things. His work has been recognized through several awards, including the 2015 ACM SIGBED Paul Caspi Memorial Dissertation Award, the 2016 EDAA Outstanding Dissertation Award, and Best Paper Awards at ACM/IEEE ICCPS 2019, ACM SenSys 2013, and ACM/IEEE IPSN 2011. More info at wwwpub.zih.tu-dresden.de/~mzimmerl


  • Carlo Alberto Boano

    TU Graz

    • Bio:

      Carlo Alberto Boano is an associate professor at the Institute of Technical Informatics of Graz University of Technology, Austria. His research interests encompass the design of dependable networked embedded systems, with emphasis on the efficiency and reliability of low-power wireless communications, on the testing and benchmarking of IoT solutions, as well as on the robustness of IoT networking protocols against environmental influences. Within this area, he has published over 100 peer-reviewed contributions, including papers at prestigious venues ranked CORE A/A*, and received several awards.


  • Laurent Vanbever

    ETH Zurich

    • Bio:

      Laurent Vanbever is an associate professor at ETH Zürich, where he started as an assistant professor in 2015. Before that, Laurent was a postdoctoral research associate at Princeton University, where he worked with Jennifer Rexford. He obtained his PhD degree in Computer Science from the University of Louvain in 2012. His research focuses on making large-scale network infrastructures more manageable, scalable, and secure. Laurent has won several awards for his research, including two best paper awards at SIGCOMM, the NSDI community award, and six IETF/IRTF Applied Networking Research Prizes.


References

Romain Jacob, Marco Zimmerling, Carlo Alberto Boano, Laurent Vanbever, and Lothar Thiele. 2021. TriScale: A Framework Supporting Replicable Performance Evaluations in Networking (Version 4). Zenodo. DOI: 10.5281/zenodo.4596442

TriScale website, triscale.ethz.ch