ACM IMC 2016 Works-in-Progress Session
The 2016 Internet Measurement Conference will include a Work-in-Progress, Tools, and Datasets (WIP) session on Monday, November 14, 2016 (17:45-18:45, directly following the last main conference session).
The WIP session is an opportunity for researchers to give a very short presentation at IMC about Works-in-Progress as well as tools or datasets that are available to the community. The WIP is not a publication venue, but a forum for the exchange of ideas with peers from the community. The format will be short “lightning” talks with time for just one or two questions afterward.
Program
Datasets/Tools
- OpenINTEL: Large-Scale, Long-Term Active DNS Measurement Data for Network and Security Research. Roland van Rijswijk-Deij, Mattijs Jonker, Anna Sperotto, and Aiko Pras (University of Twente) (slides)
- ENTRADA: an open-source Data Streaming Warehouse for DNS Operators. Giovane C. M. Moura, Maarten Wullink, Moritz Müller, and Cristian Hesselman (SIDN Labs)
- AMON: An Extensible Open Source framework for Online Monitoring, Statistical Analysis and Forensics of Multi-Gigabit Streams. Abhishek Balaji Radhakrishnan (Merit Network, Inc., University of Southern California), Michalis Kallitsis (Merit Network, Inc.), Zheng Gao and Stilian Stoev (University of Michigan), and George Michailidis (University of Florida) (slides)
v6 and DNS
- Yarrp’ing the IPv6 Internet. Eric Gaston and Dr. Robert Beverly (Naval Postgraduate School) (slides)
- A Continuing Study of the Active IPv6 Address Space. David Plonka and Arthur Berger (Akamai) (slides)
- A Lightweight Looking Glass for Flexible DNS Diagnostics and Measurement. Casey Deccio (Brigham Young University) (slides)
Tools/Platform/RG
- A New MAP: Measurement and Analysis for Protocols Research Group. David Plonka (Akamai) and Mirja Kühlewind (ETH Zurich) (slides)
- fling: A Middlebox Measurement Platform. Runa Barik and Michael Welzl (University of Oslo, Norway) and Ahmed Elmokashfi (Simula Research Laboratory, Norway) (slides)
- PEERING: An AS for Us: Giving researchers control over real BGP routing. Ethan Katz-Bassett, Brandon Schlinker (USC), Bruno Vinicius, Italo Cunha (UFMG) (slides)
Mapping and Monitoring
- Towards A Longitudinal Study of Adoption and Effectiveness of RPKI-Based Route Filtering. Andreas Reuter (Freie Universität Berlin), Brandon Schlinker and Ethan Katz-Bassett (USC), Italo Cunha (UFMG), Randy Bush (IIJ), Thomas C. Schmidt (HAW Hamburg), Matthias Wählisch (Freie Universität Berlin), and Doug Montgomery (NIST) (slides)
- Mapping the Internet Backbone. Muhammed Abdullah Canbaz and Mehmet Hadi Gunes (University of Nevada, Reno) (slides)
- Crowdsourcing Physical Infrastructure Mapping with net.tagger. Dan Woodman, Riqui Schwamm, Justin Rohrer, and Robert Beverly (Naval Postgraduate School) (slides)
- Operating System Classifier using Machine Learning. Ahmet Aksoy and Mehmet Hadi Gunes (University of Nevada, Reno) (slides)
- A Fork in the Road: Measuring Ethereum’s Recent Hurdles. Lucianna Kiffer, Alan Mislove, and Rajmohan Rajaraman (Northeastern University)
If Time Permits (Late submissions)
- Browser based network troubleshooting using machine learning techniques. Maximilian Bachl (Université Pierre et Marie Curie) and Renata Teixeira (Inria Paris) (slides)
- A Look at the Optical Layer of Datacenter Networks. Danyang Zhuo (University of Washington) and Monia Ghobadi and Ratul Mahajan (Microsoft Research) (slides)
Abstracts
- OpenINTEL: Large-Scale, Long-Term Active DNS Measurement Data for Network and Security Research. Roland van Rijswijk-Deij, Mattijs Jonker, Anna Sperotto, and Aiko Pras (University of Twente)
Abstract: Earlier this year, we presented OpenINTEL, a large-scale, long-term active DNS measurement system [1]. This measurement platform actively collects DNS data for a significant proportion of the global DNS namespace on a daily basis. Since publication, we have made a number of advances that we would like to share with the community. We have extended the measurement coverage with four ccTLDs (.nl, .se, .nu, and .ca), three ‘old’ generic TLDs (.info, .pro, and .mobi), and all new gTLDs available through ICANN’s Centralized Zone Data Service (currently 1133 new gTLDs). This means that the dataset currently covers over 192 million domain names, which is around 60% of the DNS namespace. Next to additional TLDs, we have also extended coverage to other sources of domain names, such as the Alexa top 1 million and DNS-based blacklists. In addition to extending the coverage of our measurement, we have gained experience in using this data for different research purposes. We have used the data to study the adoption of new Internet standards as well as the uptake of various Internet services, e.g. for mail handling and DDoS protection. We have also used the data to study security phenomena related to the DNS, work that is currently under review. We have also worked with the CSIRT community in a major research network to use this data to actively analyse emerging threats (in this case, so-called “CEO fraud”). Finally, we are performing educational outreach, for example through a workshop co-located with the European AIMS conference [2]. In that workshop, attendees were given the opportunity to work with OpenINTEL data and to gain hands-on experience with big data analysis tools from the Hadoop ecosystem. By presenting this work in the WIP session, we would like to call attention to the availability of this data for research, share our experience with the data so far, and discuss possibilities for collaboration. [1] van Rijswijk-Deij, R., Jonker, M., Sperotto, A., & Pras, A. (2016). A High-Performance, Scalable Infrastructure for Large-Scale Active DNS Measurements. IEEE Journal on Selected Areas in Communications, 34(7). [2] http://www.aims-conference.org/2016/labs.html
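To give a flavor of the kind of adoption analysis such a dataset supports, here is a minimal sketch of a per-TLD aggregation using Spark over a hypothetical Parquet export of daily measurement data. The dataset path and column names (query_name, response_type, tld) are illustrative assumptions, not the actual OpenINTEL schema.

```python
# Sketch: estimate per-TLD DNSSEC (DS record) adoption from one day of active
# DNS measurement data stored as Parquet. The path and column names below are
# hypothetical, not the real OpenINTEL schema.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dns-adoption-sketch").getOrCreate()

day = spark.read.parquet("hdfs:///dns-measurements/year=2016/month=11/day=14")

adoption = (
    day.groupBy("tld")
       .agg(
           F.countDistinct("query_name").alias("domains"),
           F.countDistinct(
               F.when(F.col("response_type") == "DS", F.col("query_name"))
           ).alias("domains_with_ds"),
       )
       .withColumn("ds_fraction", F.col("domains_with_ds") / F.col("domains"))
)

adoption.orderBy(F.desc("domains")).show(20, truncate=False)
```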
- ENTRADA: an open-source Data Streaming Warehouse for DNS Operators. Giovane C. M. Moura, Maarten Wullink, Moritz Müller, and Cristian Hesselman (SIDN Labs)
Abstract: Researchers often face challenges in processing large datasets. They often rely on cluster-based solutions, or other non-relational (NoSQL) databases, to achieve better performance, scalability, and dependability. Differently from traditional “only once” analysis on snapshot datasets, in this presentation we focus on high-performance cluster solutions that (i) are designed to both ingest and process continuous streams of network data and (ii) combine this with long-term data storage, delivering interactive response times (seconds or a few minutes) on both historical (years) and recent data. Such solutions are referred to in the literature as data streaming warehouses (DSW). We present ENTRADA [0], a Hadoop-based DSW we built to store continuous DNS traffic to the authoritative servers of our .nl (The Netherlands) country-code top-level domain (ccTLD). While storing raw pcaps would be unfeasible, ENTRADA leverages best principles for data engineering and parallel processing: it employs column-file storage (Apache Parquet), which is more efficient and less resource intensive than raw pcaps, and uses Cloudera Impala to parallelize data access, ultimately delivering near real-time response times. ENTRADA has been presented at two peer-reviewed conferences [1,2], has been used by .nl in our daily analysis for more than two years (e.g., [3,4]), and has been used by a handful of TLD registries since we made it publicly available. We believe other DNS researchers, DNS operators, and TLD operators can profit from and contribute to our open-source project. [0] http://entrada.sidnlabs.nl [1] https://www.sidnlabs.nl/downloads/sidn-noms2016_EN.pdf [2] https://www.sidnlabs.nl/downloads/whitepapers/sidn-ecrime.pdf [3] https://www.sidnlabs.nl/downloads/presentations/sidn-annet2016.pdf [4] https://stats.sidnlabs.nl
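As a rough illustration of the interactive-query workflow such a data streaming warehouse targets, the sketch below issues an aggregate SQL query against a Parquet-backed DNS table through Impala's Python DB-API client (impyla). The host name, table name, and column names are assumptions for illustration only, not ENTRADA's actual schema.

```python
# Sketch: interactive query against a Parquet-backed DNS traffic table via
# Impala. Host, table, and column names are hypothetical.
from impala.dbapi import connect

QUERY = """
    SELECT qtype, COUNT(*) AS queries
    FROM dns.queries
    WHERE year = 2016 AND month = 11 AND day = 14
    GROUP BY qtype
    ORDER BY queries DESC
    LIMIT 10
"""

conn = connect(host="impala.example.net", port=21050)
try:
    cur = conn.cursor()
    cur.execute(QUERY)
    for qtype, count in cur.fetchall():
        print(f"{qtype:>10} {count}")
finally:
    conn.close()
```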
- Herding ponycorns: tracing distributed systems with DTrace. Graeme Jenkinson and Robert Watson (University of Cambridge) and Arun Thomas (BAE Systems Inc)
Abstract: Dynamic tracing, and DTrace specifically, was born out of a simple necessity: “performance analysis infrastructure has generally not kept pace with the shift to in-production performance analysis” [1]. In 2016, “in production” almost always means a distributed system. The ubiquity of distributed systems has once again opened up a gap between the types of in-production analysis we’d like to perform and the available infrastructure. Tools such as Google’s Dapper or Pivot Tracing have made some progress in closing this “analysis gap”. However, existing tools tend to be limited either in scope or to a specific language runtime. This talk will present a prototype distributed tracing framework built on top of DTrace. Our prototype preserves DTrace’s key requirements of production safety and working seamlessly across the software stack. In addition, the system is capable of correlating and tracking causally related events across a distributed system. The central component of our work-in-progress prototype is a host agent (written in Rust). The host agent interfaces with libdtrace as a DTrace consumer, compiling and sending D-language scripts to the kernel for execution. Tasking of the host agent is performed by publishing scripts into Apache ZooKeeper (a distributed cluster management and configuration system). Instrumentation records are processed by the host agent before being written to a distributed commit log (such as Apache Kafka) or to a TCP stream. Upstream consumers can then apply further processing, for example aggregating records or imposing an ordering. Instrumenting latency-sensitive applications, such as networking, can be problematic: often probe effects are sufficiently large that they modify the very behaviour being investigated. The talk will summarize work on understanding and systematically measuring DTrace’s probe effect. This is a precursor to driving improvements to DTrace that enable it to be used in a broader range of applications. [1] Cantrill, Bryan, Michael W. Shapiro, and Adam H. Leventhal. “Dynamic Instrumentation of Production Systems.” USENIX Annual Technical Conference, General Track, 2004.
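A very rough sketch of the host-agent data path described above: run a D script through the dtrace(1) command-line consumer and forward each instrumentation record to an upstream sink. The actual prototype is written in Rust, drives libdtrace directly, is tasked via ZooKeeper, and writes to Kafka; this Python stand-in only mimics the shape of that pipeline, and the probe script and sink are illustrative assumptions.

```python
# Sketch of the host-agent data path: execute a D script with the dtrace(1)
# consumer and forward each output record upstream. The real prototype uses
# libdtrace from Rust, ZooKeeper for tasking, and Kafka as the sink; this
# stand-in only mirrors the overall flow. Requires root and a DTrace-capable OS.
import subprocess

# Illustrative D one-liner: count syscalls per executable, dumped once per second.
D_SCRIPT = 'syscall:::entry { @[execname] = count(); } tick-1s { printa(@); trunc(@); }'

def publish(record: str) -> None:
    # Placeholder for the upstream sink (e.g., a Kafka producer or TCP stream).
    print("record:", record)

proc = subprocess.Popen(
    ["dtrace", "-q", "-n", D_SCRIPT],
    stdout=subprocess.PIPE,
    text=True,
)
for line in proc.stdout:
    line = line.strip()
    if line:
        publish(line)
```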
- Yarrp’ing the IPv6 Internet. Eric Gaston and Dr. Robert Beverly (Naval Postgraduate School)
Abstract: The IPv6 Internet has grown significantly in size and importance in recent years. For example, the number of IPv6 routes in the BGP system has increased from fewer than 5,000 in 2011 to more than 35,000 today, while native IPv6 adoption and traffic continue their exponential increase. While there have been many studies on actively mapping the topology of the IPv4 Internet, there have been relatively few that examine the IPv6 Internet. Due to the sheer size of the IPv6 address space, prior work has either sparsely sampled the topology (for instance, tracerouting to the ::1 address in each globally announced prefix) or expended considerable time and active probing budget (for instance, by exhaustively probing one address in all /48s). In this work, we focus on the speed and scale of Internet-wide IPv6 active topology mapping by extending Yarrp (Yelling at Random Routers Progressively, a technique and tool appearing at IMC 2016). Yarrp provides the ability to map the IPv4 Internet much more quickly than current tools, primarily by being stateless and randomizing the order of active probes. By extending Yarrp to probe IPv6, our hope is to advance the state of the art in IPv6 active topology mapping. IPv6 presents some unique challenges with regard to retaining the stateless nature of Yarrp. First, IPv6 headers have removed some of the fields Yarrp uses in IPv4 to encode state, for instance the 16-bit IP identification field, which Yarrp uses to encode the originating probe’s TTL. Second, IPv6 is known to more aggressively rate-limit ICMPv6. Conversely, ICMPv6 affords the advantage of complete packet quotations, thereby greatly simplifying state recovery. Rather than being forced to encode state into the probe’s packet headers such that a partial packet quotation contains Yarrp state, the ICMPv6 RFC requires as much of the packet that induced the time exceeded message as possible to be returned. Not only does this allow us to encode and recover more state, it facilitates an easy path for changing the probe transport protocol from TCP to UDP or ICMPv6 – protocols that are known to elicit more (and different) responses. Our planned active mapping experiments include both increased coverage and speed. For instance, while prior work probed a target in each /48 of all advertised /32 prefixes (equating to, at the time, 406,388,736 unique /48 prefixes) over a period of more than four months, we anticipate Yarrp6 completing the task in 36 hours at 100Kpps. Thus, our hope is that Yarrp6 will facilitate more complete scanning of the IPv6 Internet and permit a better understanding of the IPv6 topology.
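To make the stateless-probing idea concrete, here is a small sketch of the kind of encode/decode step involved: the probe's originating hop limit (and, hypothetically, a coarse send timestamp) is packed into header bits that the ICMPv6 time exceeded quotation carries back, so the receiver can recover per-probe state without keeping any. The field layout below is an assumption for illustration only, not Yarrp6's actual encoding.

```python
# Sketch: pack per-probe state (hop limit + coarse send time) into 32 bits that
# would ride in a quoted header field, and recover it from the ICMPv6 packet
# quotation. The layout is hypothetical, not Yarrp6's actual encoding.
import struct
import time

def encode_state(hop_limit, now_ms):
    """8-bit hop limit + 24-bit truncated send timestamp (milliseconds)."""
    return struct.pack("!I", ((hop_limit & 0xFF) << 24) | (now_ms & 0xFFFFFF))

def decode_state(quoted, now_ms):
    """Recover the hop limit and estimate the RTT from the quoted 32-bit field."""
    (value,) = struct.unpack("!I", quoted)
    hop_limit = value >> 24
    sent_ms = value & 0xFFFFFF
    rtt_ms = (now_ms - sent_ms) & 0xFFFFFF   # tolerate 24-bit wrap-around
    return hop_limit, rtt_ms

if __name__ == "__main__":
    sent = int(time.time() * 1000)
    field = encode_state(hop_limit=12, now_ms=sent)
    # ... probe goes out; an ICMPv6 time exceeded message quotes the field back ...
    hop, rtt = decode_state(field, now_ms=int(time.time() * 1000))
    print("probe hop limit:", hop, "approximate RTT (ms):", rtt)
```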
- A New MAP: Measurement and Analysis for Protocols Research Group. David Plonka (Akamai) and Mirja Kühlewind (ETH Zurich)
Abstract: The IRTF Measurement and Analysis for Protocols Research Group (MAPRG) was chartered in August 2016. The MAPRG is a venue for presenting timely Internet measurement work that is directly applicable to existing protocols and their operation. Unfortunately, IMC 2016 collides with the IETF meeting and the first official meeting of the MAPRG in Seoul in November 2016. One of the group’s chairs, Mirja, will run the meeting in Seoul, while the other chair, Dave, attends IMC and would like to present the MAPRG to the IMC audience as a “meta” Work-in-Progress, i.e., a new venue for researchers and students to present their nascent work for consideration by, and in collaboration with, the Internet engineering and operations communities. While the MAPRG is ostensibly new, it has met earlier (as a proposed research group), four times over a year and a half, and has hosted more than 25 Internet measurement presentations during IETF meetings, by both academics and practitioners. The MAPRG offers opportunities for communities to connect and for research work to be evaluated early and often. As such, we hope it offers a unique new target for student work, engaging students more directly in the measurements our Internet actually needs in order to understand and improve it.
Our formal charter is as follows:
Background: Our Internet has grown into something that differs from what was envisioned. Its protocols sometimes operate in an environment other than that for which they were designed. For instance, some network elements treat some protocols differently than others, and those protocols themselves are sometimes reused and abused in ways initially unforeseen. The Measurement and Analysis for Protocols Research Group (MAPRG) explores such phenomena by measurement, with the aim to inform protocol engineering and practice. Many protocol engineering efforts in a standards development context, as well as best practices for the operation of IETF-defined protocols, can benefit from insight provided by Internet measurements of various kinds. Likewise, Internet measurement research efforts can stand to gain from contacts with the IETF. The MAPRG aims to provide a forum for interchange between these two communities, supporting:
- exchange of measurement-derived insight;
- discussion of techniques and best practices for measurement relevant to protocol engineering and network operations;
- collaborations to share data supporting these measurements; and
- a “landing pad” for the Internet measurement community to introduce its efforts to the IETF.
Membership: Membership in the MAPRG is open to all interested parties.
Meetings: The MAPRG will meet one to three times per year, as deemed necessary by the chairs and according to demand. At least one MAPRG meeting per year will be co-located with an IETF meeting. Given the MAPRG’s charter to bridge the gap between the Internet standards and measurement communities, the MAPRG may also meet co-located with relevant academic conferences or network operator forums, as appropriate. Meetings are by default open, with open attendance and published proceedings, and with remote participation and recording as provided by the meeting venue, according to the IRTF’s IPR policy. This is always the case for the at least one MAPRG meeting co-located with an IETF meeting noted above. However, as deemed necessary, the chairs may hold virtual or physical meetings with restricted attendance to discuss observations which cannot be shared openly, provided that some outcome of such a meeting may be openly shared with the community.
- A Continuing Study of the Active IPv6 Address Space. David Plonka and Arthur Berger (Akamai)
Abstract: At IMC 2015, we introduced IPv6 address classification, temporal and spatial, based on an approximately year-long longitudinal study of active WWW client addresses observed by a large CDN. In this session, we will provide a glimpse of our ongoing measurements based on this work, now spanning three years, and our classifications based on the ever-increasing count of active IPv6 WWW clients seen by a large CDN. We will also show a visualization of the active IPv6 Internet so the audience can easily identify the 5000+ networks that participate in the IPv6 Internet today. Today, we see about 1 billion active IPv6 /64 prefixes per month, covering more than 11 billion unique IPv6 client addresses per month. Even though IPv6 capability is thought to be at between 10 and 20% deployment worldwide, these numbers easily eclipse the number of IPv4 clients ostensibly active on the World-Wide Web. This highlights how the IPv6 Internet can be quite different in its use of addresses as transport endpoints, affecting many aspects of Internet operation and study where IP addresses are used as identifiers.
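As a small illustration of the per-/64 aggregation these counts rely on, the sketch below collapses observed client addresses into their covering /64 prefixes using Python's standard ipaddress module; the sample addresses are fabricated from the documentation prefix.

```python
# Sketch: aggregate observed IPv6 client addresses into /64 prefixes, the
# granularity at which the active-address counts above are reported.
# The sample addresses are fabricated for illustration.
import ipaddress
from collections import Counter

observed = [
    "2001:db8:1:2::aa10",
    "2001:db8:1:2:f0f0:1234:5678:9abc",
    "2001:db8:ffff:1::1",
]

per_prefix = Counter(
    ipaddress.ip_network(f"{addr}/64", strict=False) for addr in observed
)

for prefix, count in per_prefix.most_common():
    print(prefix, count)
```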
- Mapping the Internet Backbone. Muhammed Abdullah Canbaz and Mehmet Hadi Gunes (Department of Computer Science and Engineering, University of Nevada, Reno, NV, United States)
Abstract: Interest in the Internet’s topology characteristics has been ongoing for more than two decades. As the largest man-made complex network, the Internet has brought tremendous changes to our daily life. Over the decades, several paradigms have enabled new directions for network practitioners and end users. However, the router-level topology is still not well understood, as ASes have different optimization goals and approaches. As the Internet lacks a common measurement mechanism, researchers develop ad-hoc mechanisms to uncover the underlying connectivity. Several research groups have also developed public platforms where their extensive measurement data is shared with other researchers and network practitioners. In this work, we build on the data from such resources to extensively map the Internet backbone. To achieve such a mapping, we first obtain IP addresses along with BGP announcements from different platforms, and then perform traces toward every observed IP address from each ingress of the destination AS. Finally, we perform unresponsive router, subnet, and IP alias resolutions to reveal the underlying backbone of the router-level Internet.
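One building block of the subnet resolution step mentioned above can be sketched generically: after traces complete, observed interface addresses that fall into the same /31 point-to-point subnet are flagged as candidate link endpoints for further alias and link inference. This is a generic illustration, not the authors' actual pipeline, and the addresses are fabricated.

```python
# Sketch: group observed router interface addresses into candidate /31
# point-to-point subnets, one building block of subnet-based resolution.
# Generic illustration only; addresses are fabricated and this is not the
# authors' actual resolution pipeline.
import ipaddress
from collections import defaultdict

observed_interfaces = [
    "198.51.100.0",
    "198.51.100.1",   # same /31 as above: likely the two ends of one link
    "198.51.100.9",
    "203.0.113.41",
]

by_slash31 = defaultdict(list)
for addr in observed_interfaces:
    net = ipaddress.ip_network(f"{addr}/31", strict=False)
    by_slash31[net].append(addr)

for net, members in by_slash31.items():
    if len(members) == 2:
        print(f"candidate point-to-point link in {net}: {members}")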
- A Lightweight Looking Glass for Flexible DNS Diagnostics and Measurement. Casey Deccio (Brigham Young University)
Abstract: An understanding of Domain Name System (DNS) health through qualitative and quantitative measurement techniques can help inform deployment practices to maintain the robustness required for such a critical system. Carefully designed DNS queries and their responses (or associated network errors) form the basic data with which DNS health is assessed. However, because of the great network path diversity between DNS clients and servers, query/response visibility from diverse vantage points is necessary for greater fidelity of quantitative and qualitative measurement. In this paper we discuss the challenges associated with measuring a large, distributed, network system such as the DNS, propose a DNS looking glass solution for increased measurement fidelity, and demonstrate its performance.
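To give a flavor of the query/response data such a looking glass gathers from each vantage point, here is a minimal sketch that issues one carefully constructed query directly to a chosen server and records the response code or the network error, using the dnspython library; the query name and server address are placeholders, and this is not the looking glass's actual implementation.

```python
# Sketch: issue a single DNS query to a specific server and record the
# response code or the network error, the basic datum a DNS looking glass
# collects from a vantage point. Uses dnspython; the query name and server
# address below are placeholders.
import dns.exception
import dns.message
import dns.query
import dns.rcode

QNAME = "example.com."
SERVER = "192.0.2.53"   # address of the server under test (placeholder)

query = dns.message.make_query(QNAME, "SOA", want_dnssec=True)
try:
    response = dns.query.udp(query, SERVER, timeout=3.0)
    print("rcode:", dns.rcode.to_text(response.rcode()))
    for rrset in response.answer:
        print(rrset)
except dns.exception.Timeout:
    print("timeout: no response within 3 seconds")
```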
- AMON: An Extensible Open Source framework for Online Monitoring, Statistical Analysis and Forensics of Multi-Gigabit Streams. Abhishek Balaji Radhakrishnan (Merit Network, Inc., University of Southern California), Michalis Kallitsis (Merit Network, Inc.), Zheng Gao and Stilian Stoev (University of Michigan), and George Michailidis (University of Florida)
Abstract: The Internet is susceptible to a plethora of vulnerabilities and network-based attacks, including Distributed Denial of Service (DDoS) attacks, network scanning, etc. Efficient monitoring of network assets is thus key to prevention and mitigation of these security threats. Post-processing vast amounts of metadata (e.g., NetFlow data) to perform forensic analysis requires an excessive amount of time, often not tolerable by network operators. Timely, real-time detection of these nefarious activities is therefore paramount, but it is a challenging task given the growth of Internet traffic. While commercial alternatives exist, these solutions are often expensive and not affordable for everyone. We present extensions and future directions for a monitoring framework recently developed and deployed at Merit Network, Inc., named AMON (All-packet MONitor) [1]. AMON is an open-source tool that can process packets at multi-10Gbps speeds. It leverages the high-performance packet monitor PF_RING and is readily deployable using off-the-shelf commodity hardware. AMON provides data products, called ‘databricks’, which enable real-time visualizations, heavy-hitter identification, and detection. A ‘databrick’ is a 2D heatmap matrix that depicts information about traffic structure and intensity. AMON enables heavy-hitter identification via an extension of the Boyer-Moore majority algorithm, and automated attack detection using statistical models [1]. We have been working on adding several new modules for AMON. We started with a module that enables AMON data products to be transmitted and received by network operators in a serialized Cap’n Proto format. We are also extending the heavy-hitter identification module to report information about ports and protocols; our future plans include adding IPv6 functionality as well. Given that computing the hash of an incoming IP in an efficient manner is critical, we started looking at tabulation-based hashing techniques, and in particular 4-way universal hash functions. Another important new functionality is interactive filtering that would allow network engineers to zoom into events of interest in real time. We want to enable hash-based filtering, i.e., to focus on the streams of IPs that get hashed into a particular bin. We want to improve our detection methods to make them suitable for detecting low-volume, seemingly innocuous events such as scanning. Using a sequence of 2D databricks as input to our detection methods, we seek to model the in- and out-degrees of these matrices to detect statistical outliers that would signify abrupt changes in the community structure of the network. Viewing the network traffic as a directed graph, the corresponding adjacency and co-connectivity structures may reveal signatures in the attacks that are otherwise not obvious. It is critical to have these methods account for historical databrick observations, since nefarious scanning efforts try to evade detection by spreading scanning targets across space and time. Some future goals also include extending this framework to include modules for monitoring applications such as DNS and other types of signals in parallel. This could enable detection of botnets and other network-wide anomalies. The AMON toolset, along with assorted real-world datasets, will be made publicly available to the research and operational community via the DHS IMPACT project. Reference: [1] M. Kallitsis, S. A. Stoev, S. Bhattacharya, and G. Michailidis. AMON: An Open Source Architecture for Online Monitoring, Statistical Analysis, and Forensics of Multi-Gigabit Streams. IEEE Journal on Selected Areas in Communications, 34(6):1834-1848, June 2016.
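A toy sketch of two of the building blocks described above: hashing each packet's source and destination into a small 2D "databrick" matrix, and a Boyer-Moore-style majority vote over sources as a stand-in for the heavy-hitter step. The bin count, the hash function, and the synthetic packets are illustrative assumptions, not AMON's actual parameters or implementation.

```python
# Toy sketch of AMON-style building blocks: a 2D "databrick" indexed by hashed
# source/destination, plus a Boyer-Moore majority vote over sources as a
# stand-in for heavy-hitter identification. Bin count, hash function, and the
# synthetic packets are illustrative assumptions.
import hashlib
import numpy as np

BINS = 64   # small for illustration; a real deployment would fix its own size

def bucket(ip):
    # Stand-in hash; the abstract discusses tabulation-based and 4-way
    # universal hashing for this step.
    return int.from_bytes(hashlib.sha1(ip.encode()).digest()[:4], "big") % BINS

def update_databrick(databrick, packets):
    """Add each (src, dst, nbytes) packet to the 2D intensity matrix."""
    for src, dst, nbytes in packets:
        databrick[bucket(src), bucket(dst)] += nbytes
    return databrick

def majority_candidate(sources):
    """Boyer-Moore majority vote: the only element that could exceed 50%."""
    candidate, count = None, 0
    for s in sources:
        if count == 0:
            candidate, count = s, 1
        elif s == candidate:
            count += 1
        else:
            count -= 1
    return candidate

packets = [("203.0.113.7", "198.51.100.1", 1500)] * 6 + \
          [("192.0.2.10", "198.51.100.2", 400)] * 3
brick = update_databrick(np.zeros((BINS, BINS)), packets)
print("non-zero databrick cells:", np.count_nonzero(brick))
print("heavy-hitter candidate:", majority_candidate(src for src, _, _ in packets))
```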
- Crowdsourcing Physical Infrastructure Mapping with net.tagger. Dan Woodman, Riqui Schwamm, Justin Rohrer, and Robert Beverly (Naval Postgraduate School)
Abstract: While the logical topology of the Internet has been well explored, e.g., through decades of work in collecting and analyzing IP-, router-, or AS-level topologies, its physical topology is less well understood. Examples of physical topology include the geographic location of communication cables, their physical paths through the ground or ocean, and the location of the devices and buildings they connect. Current approaches to physical topology mapping include parsing router DNS pointer records (which is difficult, coarse-grained, not always possible, and inaccurate) and mining publicly available information including published provider maps, environmental impact reports, and utility regulatory filings (which are incomplete and relatively inaccurate). As a complementary approach, we propose and develop net.tagger, a crowd-sourced physical infrastructure mapping service. The basic idea is simple: indicators of physical network infrastructure and their owners are everywhere. For instance, a communications manhole cover in the street is clearly distinguishable from other utilities, while dig markings (temporary spray-paint delineations in the street used during construction) are orange for telecom and indicate, at a minimum, the provider and the communication line’s path. Other infrastructure is similarly visible and easily discoverable, including handholes, duct markings, dig warnings, cell towers, and even provider central offices and co-location facilities. To exploit this wealth of available information, we created net.tagger, an Android app available in the Google Play store. net.tagger allows users to simply and easily “tag” infrastructure using their mobile device, e.g., as the user is walking and discovers a street-level indicator of infrastructure. In addition to recording the geo-coordinates of the infrastructure, the tag can include additional metadata such as the infrastructure type, provider name, a picture, and the phone’s orientation. This metadata is copied to our server, where it is integrated into a global infrastructure map (based on OpenStreetMap to facilitate future interoperability and openness) and stored for later analysis. A significant future research challenge is aggregating tags and deriving a higher-level understanding of the infrastructure. The success of any crowd-sourced system depends on adoption, and net.tagger is no exception. Now that we have developed the basic system and app, we are experimenting with various means to incentivize users, including per-country and per-user or per-team ranking systems, and bounties for discovering particular types of infrastructure or infrastructure that helps validate inferences. In this work-in-progress talk we seek to: i) introduce net.tagger as a tool for the community; and ii) solicit feedback on our approach and research directions.
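To make the reporting path concrete, the sketch below shows what a single tag upload might look like as a JSON record posted to a collection server; the endpoint URL, field names, and values are invented for illustration and are not the app's actual API.

```python
# Sketch: a single net.tagger-style infrastructure tag serialized as JSON and
# uploaded to a collection server. The endpoint URL, field names, and values
# are invented for illustration; they are not the app's actual API.
import requests

tag = {
    "type": "manhole_cover",          # e.g. manhole, handhole, dig_marking, cell_tower
    "provider": "ExampleTelecom",     # as read from the marking, if visible
    "lat": 36.5977,                   # fabricated geo-coordinates
    "lon": -121.8746,
    "heading_deg": 212.0,             # phone orientation when the tag was taken
    "photo": "tag-2016-11-14-001.jpg",
    "timestamp": "2016-11-14T17:45:00Z",
}

resp = requests.post("https://tags.example.org/api/v1/tags", json=tag, timeout=10)
resp.raise_for_status()
print("server response:", resp.status_code)
```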
- Towards A Longitudinal Study of Adoption and Effectiveness of RPKI-Based Route Filtering. Andreas Reuter (Freie Universität Berlin), Brandon Schlinker and Ethan Katz-Bassett (USC), Italo Cunha (UFMG), Randy Bush (IIJ), Thomas C. Schmidt (HAW Hamburg), Matthias Wählisch (Freie Universität Berlin), and Doug Montgomery (NIST)
- fling: A Middlebox Measurement Platform. Runa Barik and Michael Welzl (University of Oslo, Norway) and Ahmed Elmokashfi (Simula Research Laboratory, Norway)
- Operating System Classifier using Machine Learning. Ahmet Aksoy and Mehmet Hadi Gunes (University of Nevada, Reno)
- A Fork in the Road: Measuring Ethereum’s Recent Hurdles. Lucianna Kiffer, Alan Mislove, and Rajmohan Rajaraman (Northeastern University)
Call for Works in Progress
Topics of interest include all topics listed on the general IMC 2016 call-for-papers, but as a WIP session, we look for early work of interest to the community, even if it is not yet complete, and for tools or datasets that are of use to the community, even if the tool or dataset by itself may not warrant a full paper.
A summary of the WIP presentations will be included as an editorial in SIGCOMM CCR. Presenters may opt out of having their WIP covered in this article, but the default will be to include a summary of each presentation.
The WIP session chair is David Choffnes; please contact him for questions.
Review Process, Criteria, and Submission Guidelines
We expect to accept many works-in-progress, with the goal of sharing early results rather than presenting final work. Submissions should not have been previously published in a workshop, conference, or journal. Previously presented poster and demo ideas are acceptable, as are descriptions of previously published tools/datasets that have new features.
We expect to review submissions lightly for appropriateness, and we may prioritize to fit the available time. There will be a “soft” deadline, at which point we will select submissions for inclusion in the program and post it publicly. Space permitting, we will include WIP submissions up until the day before the event.
The submission format is text-only abstracts, along with author names and affiliations. You are limited to 500 words, and we will not accept or review any PDFs.
The submission site is here, but note that the program is currently full and any new submissions will be waitlisted in case of no-shows.
Ethical Considerations
WIP talks, tools, and datasets have the same ethical considerations as general IMC talks (as described in the general IMC 2016 call-for-papers).
Important Dates
~~WIP “soft deadline” abstract submission: October 14, 2016~~
~~Notification: October 21, 2016~~
~~Initial program published: October 24, 2016~~
WIP “hard deadline” (only if existing WIPs withdraw): November 13, 2016
WIP session: November 14, 2016 (17:45-18:45)