The 3rd Workshop on Networks for AI Computing (NAIC)
Generative AI is transforming many aspects of modern society, with content ranging from text and images to videos. The Large Language Models (LLMs) and other Artificial Intelligence (AI)/Machine Learning (ML) models that enable these generative AI capabilities place unprecedented pressure on modern data centers, with anecdotal evidence suggesting that the largest models can take months to train. To support these models, modern distributed training clusters contain tens of thousands of GPUs/TPUs, and many expect this scale to increase significantly.
More fundamentally, training these large models introduces network communication patterns that demand sophisticated and novel topology, routing, and synchronization designs. As the adoption of such models grows, the data generated and the data required for training and inference will place new emphasis on the design of novel network primitives. The scale, workload, and performance requirements compel us to reconsider every layer of the network stack and scrutinize solutions from a holistic perspective. The recent industry initiative, the Ultra Ethernet Consortium (UEC), is actively working on Ethernet-based network optimizations for AI and HPC workloads. The Open Compute Project (OCP) is geared more toward infrastructure support for AI computing. Standards organizations (e.g., the IETF) are also seeking opportunities in networking for AI computing. We believe the networking research community should take a bolder position and bring cutting-edge innovations to this front as well.
The workshop aims to bring together researchers and experts from academia and industry to share the latest research, trends, and challenges in cloud and data center networks for AI computing. We expect it to enrich our understanding of AI workloads, their communication patterns, and their impact on networks, and to help the community identify future research directions. We encourage lively debate on issues such as convergence vs. disaggregation, front-end vs. back-end networks, smart edges vs. programmable cores, and the need for new interconnects, topologies, transports, and routing algorithms and protocols.
Topics of interest include, but are not limited to:
- Technologies for RDMA and Ethernet efficiency, performance, security, and extensibility
- Load balancing for distributed learning
- Lossless and loss-tolerant network design
- Host and network integration and coordination
- New transport protocols and congestion control for AI training
- Programmable congestion control
- New network architecture and topologies for AI and HPC
- Offloading to SmartNICs/DPUs, host hardware, and switches
- Scale-out and scale-up network convergence
- Programmable networks for AI workloads
- In-network computing techniques and protocols for distributed training and MPI support
- Application-aware networking for AI training and inference
- Collective communication optimization
- Networking for cross-DC learning
- Network optimization for inference
- Convergence of computing, storage, and networking
- Automated and intelligent OAM (operations, administration, and maintenance) for AI data center networks (DCNs)
- LLMs for DCN OAM
- Fault prediction, detection, and root cause analysis
- New measurement and telemetry metrics and methods
- Green data center for energy efficiency
- Traffic characterization for AI workloads
- Network simulation and benchmarking for AI workloads
- Networking support for Agentic AI
We invite researchers and practitioners to submit original research papers, including position papers on disruptive ideas and early-stage work with the potential to develop into full papers.
Reviewing will be double-blind. Authors must make a good-faith effort to anonymize their submissions. Papers must not include author names or affiliations, and must avoid implicitly disclosing the authors' identity (e.g., via self-citations or funding acknowledgments).
We accept two types of submissions:
- Regular research papers of up to 6 pages, excluding references and appendices. Submissions must be original, unpublished work, and not under consideration at another conference or journal. Authors of accepted submissions are expected to present their work at the workshop. Accepted submissions will be included in the workshop proceedings.
- Extended abstracts of up to 2 pages, excluding references, in the same format as regular papers. These submissions cover early-stage and position work still in progress, allowing authors to showcase preliminary ideas and receive early feedback at the workshop. Authors are expected to present their work as a lightning talk and/or poster during the workshop. Authors of accepted submissions may opt out of including them in the workshop proceedings.
Please submit your paper via https://naic26.hotcrp.com/
Submissions must be printable PDF files. When creating your submission, you must use the sigconf proceedings template (two-column format, 10-pt font size) available on the official ACM site. LaTeX submissions should use the acmart.cls template (sigconf option) with 10-pt font.
The NAIC workshop will feature a best paper award.
| Event | Date |
|---|---|
| Submission deadline | May 4, 2026 |
| Acceptance notification | June 7, 2026 |
| Camera-ready deadline | June 20, 2026 |
| Workshop date | August 17, 2026 |
| Organizers | Institution |
|---|---|
| Alan Liu | Maryland |
| Maria Apostolaki | Princeton |
| Danyang Zhuo | Duke |
| Name | Institution |
|---|---|
| Haseeb Ashfaq | NYU |
| Ran Ben Basat | UCL |
| Qizhe Cai | UVa |
| Xiaoqi Chen | Purdue |
| Kuntai Du | TensorMesh |
| Soudeh Ghorbani | JHU |
| Prankur Gupta | Meta |
| Marios Kogias | Imperial |
| Ming Liu | Wisconsin |
| Harsha Madhyastha | USC |
| Amedeo Sapio | NVIDIA |
| Wenfei Wu | PKU |
| Weitao Wang | |
| Jiarong Xing | Rice |
| Qiao Xiang | Xiamen U. |
| Annus Zulfiqar | UMich |
| Hong Zhang | Waterloo |
| Junxue Zhang | USTC |
| Yang Zhou | UC Davis |
| Name | Institution |
|---|---|
| Theophilus A. Benson | CMU |
| Torsten Hoefler | ETH Zurich |
| TV Lakshman | Nokia Bell Labs |
| Haoyu Song | Futurewei |
| Ying Zhang | Meta |
| Zhi-li Zhang | Minnesota |