Half-day Tutorial: TouchNDN: Visual Programming of Data-Centric Video
This half-day tutorial explores a practical data-centric approach to low-latency video that offers interesting new capabilities for creators of media-rich experiences and applications. It will introduce the authors’ data-centric video concept and a specific implementation for Derivative’s TouchDesigner real-time media processing platform using Named Data Networking (NDN). The implementation, TouchNDN, enables a hybrid design paradigm for low-latency video that combines properties of streams, buses, and stores. This approach unifies real-time live and historical playback, and is used to support edge-assisted machine learning. Through a combination of case study examples, hands-on experimentation, and technical background presentation, the tutorial will examine how TouchNDN can be used to explore a data-centric approach to building media-rich distributed applications.
Peter Gusev is a staff software engineer with UCLA REMAP. He focuses primarily on Named Data Networking (NDN) and leads the development of NDN-RTC, a real-time conferencing tool. In addition to his work on NDN, Peter has contributed to a wide range of interactive audiovisual projects. He holds master's degrees from Bauman Moscow State Technical University and Wroclaw University of Technology.
Jeff Burke is Professor In-Residence and Associate Dean for Technology and Innovation at the UCLA School of Theater, Film and Television (TFT), where he has been a faculty member since 2001. His research explores the intersections of the built environment, computer networks, and storytelling. Burke co-founded REMAP, a joint center of TFT and the Henry Samueli School of Engineering and Applied Science, which uses a mixture of research, artistic production, and community engagement to investigate the interrelationships among culture, community, and technology. He is Co-PI and application team lead for the Named Data Networking research project.
This tutorial introduces key concepts and emerging visual programming tools for working with low-latency data-centric video over Named Data Networking (NDN). It builds on a number of previous tutorials and is based on material from our upcoming paper on Data-Centric Video for Mixed Reality [1]. At previous ICN conferences, the NDN team has presented tutorials on the basics of the architecture and its software, to promote related research and to encourage community contribution to the open-source platform. Earlier tutorials focused primarily on introductions to NDN. Last year, the team focused on how NDN can be used to support mobile-edge computing (MEC) paradigms that offload computation and other tasks performed on low-latency data streams. This tutorial will be substantially different in focus and form, leaving behind the basics of Interest/Data exchange and lower-level protocols in order to focus on what NDN offers to developers and content creators working with moving images, using 2D video as an example but pointing to opportunities in emerging forms such as volumetric video. We will discuss important architectural concepts that we are exploring in these areas, the software we have built to explore them in NDN, and open issues.
In a preceding side event to ICN'19, we will introduce TouchNDN, an NDN toolkit (software and protocols) for the real-time media processing environment TouchDesigner. The event’s audience will be media artists and others interested in what information-centric networking might offer to their field. In this tutorial, we will unpack more details of how TouchNDN works for the ICN community.
Type of the Tutorial
This tutorial is a half-day session with an optional hands-on component. It will present data-centric approaches to video streaming and a higher-level data-centric API for building media-rich systems. The tutorial is structured around the presentation of the TouchNDN package, a multi-component NDN toolset for TouchDesigner. We expect the duration of the tutorial to be approximately 4 hours with two 10-minute breaks. Though we expect the tutorial to be of interest to researchers with all levels of NDN experience, this will not be a basic introduction to NDN applications. It will be an intermediate-level tutorial that requires either some basic experience with NDN or similar ICN architectures, or a willingness to follow along with topics that build on basics only covered briefly.
Outline of the Tutorial
1) Welcome and Introduction (15 minutes)
We will open with a brief overview of the main principles of NDN, its architecture, and existing NDN development tools. We will motivate the need for high-level programming abstractions for data-centric development, which will be used later in the tutorial, and describe how these abstractions can simplify application design and encourage data-centric designs that take advantage of the architecture's capabilities. We will then briefly discuss limitations of the TCP/IP architecture for emerging media applications and how they can benefit from NDN.
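The core principle behind this overview can be sketched in a few lines: in NDN, consumers request immutable Data packets by hierarchical name rather than by host address, and forwarders route Interests by longest-prefix match over registered name prefixes. The sketch below is a toy illustration of that lookup only, not a real NDN forwarder; the names and faces are invented for the example.

```python
# Toy FIB: name prefixes mapped to outgoing "faces" (next hops).
# Illustrative only; a real NDN forwarder also keeps a PIT and content store.
fib = {
    "/touchndn": "face-1",
    "/touchndn/demo/video": "face-2",
}

def forward(interest_name):
    """Pick the face for the longest registered prefix matching the name."""
    components = interest_name.strip("/").split("/")
    best_len, face = -1, None
    for prefix, f in fib.items():
        p = prefix.strip("/").split("/")
        if components[:len(p)] == p and len(p) > best_len:
            best_len, face = len(p), f
    return face

# An Interest for a specific video frame matches the more specific prefix.
assert forward("/touchndn/demo/video/frame/3") == "face-2"
assert forward("/touchndn/status") == "face-1"
assert forward("/other/app") is None
```

Because retrieval is keyed entirely on names, any node holding a matching Data packet (producer, in-network cache, or repository) can satisfy the request, which is the property the later video sections build on.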
2) Data-centric video (15 minutes)
Next, we will give an overview of a data-centric paradigm for low-latency video streaming over NDN, including motivations, potential benefits, and challenges. This is described in more detail in [1]. Using recent projects at UCLA REMAP as case studies, we will present typical application designs for video consumers and producers and discuss benefits (and challenges) of the data-centric approach to video for interactive and non-linear applications. We will focus specifically on the following capabilities: 1) direct random access to video frames; 2) receiver-driven streaming of spatial regions-of-interest; 3) seamless live/historical playback; 4) multi-producer video multiplexing.
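The first and third capabilities above both fall out of per-frame naming. The sketch below uses a hypothetical naming scheme (the actual NDN-RTC namespace differs in detail) to show the key point: because each encoded frame is an immutable, named object, "seeking" is just constructing a different name, and live and historical fetches use identical consumer code.

```python
# Hypothetical per-frame naming scheme for data-centric video.
# The prefix, stream, and component labels are illustrative, not NDN-RTC's.

def frame_name(prefix, stream, frame_no, segment=0):
    """Build the name of one segment of one encoded video frame."""
    return f"{prefix}/{stream}/frame/{frame_no}/seg/{segment}"

# Direct random access into the historical record: just name the frame...
historical = frame_name("/touchndn/studioA", "camera1", 1200)
# ...and the latest live frame is fetched the same way, from the same
# namespace, so live and historical playback need no separate protocol.
live = frame_name("/touchndn/studioA", "camera1", 98431)

assert historical == "/touchndn/studioA/camera1/frame/1200/seg/0"
assert live == "/touchndn/studioA/camera1/frame/98431/seg/0"
```

Receiver-driven region-of-interest fetching follows the same pattern: spatial tiles or quality layers become additional name components that the consumer selects among.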
3) TouchDesigner (10 minutes)
This section will introduce Derivative's TouchDesigner – a visual node-based programming environment for 2D and 3D media processing used in many interactive systems. We will review the system's main components and principles, and briefly explain how it can be extended using the Python and C++ programming languages. We will also explain our motivation for using this platform for data-centric development and how one might expect data-centric video to be exposed and manipulated in the platform.
For a deeper exploration of TouchDesigner, we will encourage tutorial participants to attend the free side event earlier in the week.
4) TouchNDN (Hands-on) (45 minutes)
Following the introduction to the TouchDesigner programming environment, we will present TouchNDN – a TouchDesigner framework for data-centric video development using NDN. It will be introduced through a series of pre-built examples that participants who have installed the software on their laptops in advance can follow along with.
For motivation, this section will continue to explore the case studies introduced earlier. We will demonstrate the use of high-level data-centric programming abstractions of TouchNDN by walking the audience through the process of creating a prototype for a distributed interactive media system. We will also touch on how the demonstrated examples might be implemented using IP and discuss challenges and design compromises of both approaches.
5) Underlying libraries (15 minutes)
This section will review the libraries underneath TouchNDN that implement data-centric video and provide high-level abstractions for working with NDN data. Specifically, it will introduce NDN-RTC – our real-time data-centric media streaming library – and the NDN Common Name Library (NDN-CNL). We will discuss the motivation and design approach for both of these high-level libraries, as well as their evolution and use in our group's applications. For NDN-RTC, we will briefly describe the library architecture, API, and codebase, and demonstrate additional standalone tools for video streaming and stream inspection. For NDN-CNL, we will focus on higher-level data-centric programming abstractions, such as Namespaces and Namespace Handlers, and how these relate to and are used with NDN-RTC in TouchNDN.
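To give a flavor of the Namespace abstraction discussed in this section, the sketch below models a name hierarchy as a lazily built tree of nodes with callbacks. This is not the NDN-CNL/PyCNL API; all class and method names here are illustrative. The point it conveys is the programming model: applications attach handlers to nodes of a name tree and receive objects as they arrive, instead of managing individual Interest/Data exchanges.

```python
# Minimal namespace-tree sketch in the spirit of NDN-CNL's Namespace
# abstraction (hypothetical API, for illustration only).

class Namespace:
    def __init__(self, name):
        self.name = name
        self.children = {}
        self.obj = None
        self._callbacks = []

    def __getitem__(self, component):
        """Lazily create and return the child node for a name component."""
        if component not in self.children:
            self.children[component] = Namespace(f"{self.name}/{component}")
        return self.children[component]

    def on_object(self, callback):
        """Register a handler to run when an object arrives at this node."""
        self._callbacks.append(callback)

    def deliver(self, obj):
        """Simulate arrival of an object (e.g., a reassembled video frame)."""
        self.obj = obj
        for cb in self._callbacks:
            cb(self, obj)

root = Namespace("/touchndn/studioA/camera1")
received = []
frame = root["frame"]["42"]          # navigate/create the name tree
frame.on_object(lambda ns, o: received.append((ns.name, o)))
frame.deliver(b"encoded-frame-42")   # in real use, the library does this
assert received == [("/touchndn/studioA/camera1/frame/42", b"encoded-frame-42")]
```

In NDN-CNL proper, Namespace Handlers play the role of `deliver` here, performing segmentation, reassembly, and verification under the hood so that TouchNDN operators can work at the level of whole frames and streams.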
6) Unpacking the examples (90 minutes)
After the introduction to the libraries, we will dive deeper into how the presented demo and prototypes are implemented using TouchDesigner, NDN-CNL, and NDN-RTC, and how the abstractions provided by the libraries translate into low-level networking with Interest/Data packets. We will also cover the design choices in mapping data-centric video into TouchDesigner's node-based programming environment. This section is designed to encourage participants to follow along and explore on their own in TouchDesigner with their laptops. Participants will be given time to actively contribute to the discussion.
7) Conclusion and wrap-up discussion (20 minutes)
We’ll conclude the formal tutorial with a summary of future work and potential research and collaboration opportunities, and transition into an audience-driven wrap-up discussion.
Requirements for Attendees
Attendees wishing to (optionally) follow along must bring a macOS laptop on which they have pre-installed and tested the full NDN platform. They must also pre-install the latest version of TouchDesigner and the TouchNDN framework (distributed via the mailing list ahead of the tutorial). Time will not be allocated in the tutorial for troubleshooting participants' installations. We will provide limited email support to participants who encounter any trouble with these pre-installation steps in the weeks leading up to the tutorial.
Attendees should have some reasonable conceptual and practical familiarity with the NDN architecture and the fundamentals of Interest/Data exchange. Ideally, they should be comfortable with Python, as well as with getting around in the Unix shell. No prior familiarity with TouchDesigner is required.
[1] Peter Gusev, Jeff Thompson, and Jeff Burke. 2019. Data-Centric Video for Mixed Reality (Invited Paper). In ICCCN'19, Valencia, Spain.