ACM SIGCOMM 2017, Los Angeles, CA

ACM SIGCOMM 2017 Reproducibility Workshop (Reproducibility’17)

Workshop Program

  • Friday, August 25, 2017, Optimist Room (Luskin Center)

  • 9:00am - 9:15am Opening Remarks

    Room: Optimist Room (Luskin Center)

  • 9:15am - 10:30am Session 1 - Why is reproducibility so hard?

    Session Chair: TBD

    Room: Optimist Room (Luskin Center)

  • Challenges with Reproducibility

    Vaibhav Bajpai (TU Munich); Mirja Kühlewind (ETH Zürich); Jörg Ott (TU Munich); Jürgen Schönwälder (Jacobs University Bremen); Anna Sperotto (University of Twente); Brian Trammell (ETH Zürich)

    • Abstract: The Computer Science (CS) culture is lenient about accepting papers that are non-reproducible, as long as they appear plausible. In this paper, we discuss some of the challenges with reproducibility and a set of recommendations that we as a community can undertake to initiate a cultural change.


  • Towards an Ecosystem for Reproducible Research in Computer Networking

    Quirin Scheitle (Technical University of Munich); Matthias Wählisch (Freie Universität Berlin); Oliver Gasser (Technical University of Munich); Thomas C. Schmidt (HAW Hamburg); Georg Carle (Technical University of Munich)

    • Abstract: Reproducibility is key to rigorous scientific progress. However, many publications in the computer networks community lack support for reproducibility. In this paper, we argue that the lack is mainly rooted in the additional effort that authors need to spend, without expecting sufficient benefits. Based on our experience in both authoring reproducible research and reproducing publications, we propose an ecosystem that incentivizes authors and reproducers to invest additional effort. This ecosystem consists of various building blocks, which can be combined into venue-specific profiles. A key building block is the Reproducibility Challenge, which we suggest co-locating with the annual SIGCOMM conference to leverage reproducibility research in practice.


  • Learning Reproducibility with a Yearly Networking Contest

    Marco Canini (KAUST); Jon Crowcroft (University of Cambridge)

    • Abstract: Better reproducibility of networking research results is currently a major goal that the academic community is striving towards. This position paper makes the case that improving the extent and pervasiveness of reproducible research can be greatly fostered by organizing a yearly international contest. We argue that holding a contest undertaken by a plurality of students will have benefits that are two-fold. First, it will promote hands-on learning of skills that are helpful in producing artifacts at the replicable-research level. Second, it will advance the best practices regarding environments, testbeds, and tools that will aid the tasks of reproducibility evaluation committees by and large.


  • Taming the complexity of artifact reproducibility

    Matthias Flittner, Robert Bauer (Karlsruhe Institute of Technology); Amr Rizk (Technische Universität Darmstadt); Stefan Geißler, Thomas Zinner (University of Würzburg); Martina Zitterbart (Karlsruhe Institute of Technology)

    • Abstract: Reproducing research results, as it is required for peer review, can be a time-consuming and difficult task. In this work, we propose three approaches to improve the way of how research results can be substantiated and discuss their applicability. Our proposals are based on a brief study on evaluation methods (for SDN research) and insights from a comprehensive discussion on reproducibility.


  • 10:30am - 11:00am Coffee Break (Foyer)

  • 11:00am - 12:30pm Session 2 - How reproducible is our research?

    Session Chair: TBD

    Room: Optimist Room (Luskin Center)

  • Keynote: FindResearch.org: How to Encourage Sharing of Research Artifacts

    Christian Collberg (University of Arizona)


    Abstract: TBD


  • A Meta-Analysis Approach for Feature Selection in Network Traffic Research

    Daniel C. Ferreira, Félix Iglesias, Gernot Vormayr, Maximilian Bachl, Tanja Zseby (TU Wien)

    • Abstract: The selection of features for network traffic monitoring and anomaly detection is a challenge for experts who aim to build systems that discover traffic patterns, characterize networks, and improve security. There are no major guidelines or best practices for feature selection in the field. The literature is full of different proposals that ultimately depend on feature availability, types of known traffic, tool limitations, specific goals, and, fundamentally, the experts’ knowledge and intuition. In this work we have revisited 71 principal publications from 2005 to 2017. Relevant information has been curated according to formalized data structures and stored in JSON format, creating a database for the smart retrieval of network traffic analysis research. Meta-analysis performed upon the explored publications disclosed a set of main features that are common to a considerable volume of works and could be used as a baseline for future research. Additionally, aiming for validation and generalization in network traffic research, the creation of such meta-analysis environments is much needed. It allows criteria for the design of experiments and tools to be homogenized and unified, thus preventing work from getting lost or becoming irrelevant due to the high complexity and variability that network traffic analysis involves.


  • Lessons Learned while Trying to Reproduce the OpenRF Experiment

    Mohamed Naoufal Mahfoudi, Thierry Turletti, Thierry Parmentelat, Walid Dabbous (Université Côte d’Azur, Inria, France)

    • Abstract: Evaluating and comparing the performance of wireless systems, as in any other scientific area, requires the ability to reproduce experimental results. In this paper, we describe the specific issues that we encountered when reproducing the experiments described in a paper on wireless systems. We selected the OpenRF paper published at SIGCOMM 2013, a very interesting research work that enables beamforming on commodity WiFi devices. We illustrate how reproducibility is strongly dependent on the hardware used, and why extensive knowledge of that hardware and its design is necessary. On the basis of this experience, we propose some recommendations and lessons for the design of reproducible wireless experiments.


  • Testbeds Support for Reproducible Research

    Lucas Nussbaum (LORIA - INRIA - Université de Lorraine)

    • Abstract: In the context of experimental research, testbeds play an important role in enabling the reproducibility of experiments, by providing a set of services that help experimenters set up the experimental environment and collect data about it. This paper explores the status of three different testbeds (Chameleon, CloudLab and Grid'5000) regarding features required for, or related to, reproducible research, and discusses some open questions on that topic.


  • 12:30pm - 2:00pm Lunch Break (Centennial Terrace)

  • 2:00pm - 3:30pm Work group discussions

    Session Chair: Olivier Bonaventure, Luigi Iannone, and Damien Saucez

    Room: Optimist Room (Luskin Center)

  • 2:00pm-3:30pm Group discussions

    Room: Optimist Room (Luskin Center)

  • 3:30pm - 4:00pm Coffee Break (Foyer)

  • 4:00pm - 5:30pm Work groups wrap-up and conclusions

    Session Chair: Olivier Bonaventure, Luigi Iannone, and Damien Saucez

    Room: Optimist Room (Luskin Center)

  • 4:00pm - 4:45pm Work group reporting

    Room: Optimist Room (Luskin Center)

  • Each work group prepares a report and presentation to discuss its conclusions.

  • 4:45pm - 5:25pm Wrap-up

    Room: Optimist Room (Luskin Center)

  • Each work group presents its recommendations and defines action points.

  • 5:25pm - 5:30pm Closing remarks

    Room: Optimist Room (Luskin Center)

Call For Papers

Ensuring the reproducibility of results is essential in experimental sciences. Unfortunately, as recently highlighted, a large proportion of research results are hardly, if at all, reproducible, raising reasonable doubts about the research carried out by scientists around the world.

To address this important issue, the ACM has introduced a new policy on result and artifact review and badging. The policy clearly defines the terminology to be used to assess results and artifacts, but does not specify the review process.

The goal of the workshop is to craft recommendations on the review process for research results and artifacts at the conferences and journals of the SIGCOMM special interest group. To achieve this goal, the workshop is structured as a forum where presenters will discuss with attendees their experience in reviewing research results and artifacts.

To prepare the discussion, authors are requested to submit discussions of papers with experimental results and artifacts published at the various conferences in recent years. Papers submitted to the workshop will provide an extended abstract with either feedback on the actual review of papers at these conferences or proposals for methods to review papers from a results and artifacts perspective. To stimulate discussion both before and after the workshop, an open virtual forum has been set up to exchange ideas and experiences: http://reproducibility.lisp-lab.org.

Topics of Interest

Topics of particular interest include, but are not limited to:

  • Reviewing experience;
  • Enabling environments and tools for reproducibility;
  • Methods and tools to anonymize datasets;
  • Testbeds for reproducible research;
  • Platforms to share experimental results and artifacts.

The workshop will try to answer questions raised by the ACM, such as:

  • Should reviews occur before or after acceptance of a paper?
  • How many reviewers should there be?
  • Should the reviewers be anonymous, or should they be allowed to interact openly with the authors?
  • How should artifacts be packaged for review?
  • What specific metrics should be used to assess quality?

Contact workshop co-chairs.

Submission Instructions

Submissions must be original unpublished work. Submitted papers must be at most three (3) pages long, including all figures, tables, references, and appendices, in two-column 10pt ACM format. Papers must include authors' names and affiliations for peer review by the PC. Authors are invited to follow the terminology defined by the ACM Result and Artifact Review and Badging publication policy. Authors of accepted papers are expected to present their papers at the workshop. The outcomes of the workshop will be published in ACM Computer Communication Review (CCR).

To guarantee everyone the freedom to attend, remote attendance will be possible for the workshop.

Paper registration and submission can be done via HotCRP at: https://sigcomm17reproducibility.hotcrp.com/.

Important Dates

  • January 25, 2017

    Opening of the discussion forum and submission system (OPEN NOW)

  • March 31, 2017, 9:00pm PST (extended from March 24, 2017)

    Submission deadline

  • April 28, 2017

    Acceptance notification

  • May 29, 2017, 9:00pm PST

    Camera ready deadline

Authors Take Note

The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to TWO WEEKS prior to the first day of your conference. The official publication date affects the deadline for any patent filings related to published work.

Committees

  • Workshop Co-Chairs
  • Olivier Bonaventure

    Université catholique de Louvain, Belgium

  • Luigi Iannone

    Telecom ParisTech, France

  • Damien Saucez

    Inria, France

  • Technical Program Committee
  • Amogh Dhamdhere

    CAIDA

  • Anja Feldmann

    Technische Universität Berlin

  • Bob Lantz

    Open Networking Laboratory

  • Gwendal Simon

    Telecom Bretagne

  • Hamed Haddadi

    Queen Mary University of London

  • Joerg Ott

    Technische Universität München

  • Klaus Wehrle

    RWTH Aachen University

  • Marco Fiore

    CNR – IEIIT

  • Marco Mellia

    Politecnico di Torino

  • Matthias Wählisch

    Freie Universität Berlin

  • Mohamed Naoufal Mahfoudi

    Inria

  • Phillipa Gill

    University of Massachusetts — Amherst

  • Tom Henderson

    University of Washington

  • Tristan Henderson

    University of St Andrews

  • Yibo Zhu

    Microsoft