The 3rd Workshop on Networks for AI Computing (NAIC)

Call for Papers

Generative AI is transforming many aspects of modern society, with content ranging from text and images to videos. The Large Language Models (LLMs) and other Artificial Intelligence (AI)/Machine Learning (ML) models that enable these generative AI capabilities are placing unprecedented pressure on modern data centers, with anecdotal evidence suggesting that the largest models can take months to train. To support these models, modern distributed training clusters contain tens of thousands of GPUs/TPUs, and this scale is widely expected to grow significantly.


More fundamentally, training these large models introduces network communication patterns that demand sophisticated and novel topologies, routing schemes, and synchronization mechanisms. As the adoption of such models grows, the data generated, and the data required for training and inference, will place new demands on the design of network primitives. The scale, workloads, and performance requirements force us to reconsider every layer of the network stack and to scrutinize solutions from a holistic perspective. The recent industry initiative, the Ultra Ethernet Consortium (UEC), is actively working on Ethernet-based network optimizations for AI and HPC workloads. The Open Compute Project (OCP) is geared more toward infrastructure support for AI computing. Standards organizations (e.g., the IETF) are also seeking opportunities in networking for AI computing. We believe the networking research community should take a bolder position and bring cutting-edge innovations to this front as well.


The workshop aims to bring together researchers and experts from academia and industry to share the latest research, trends, and challenges in cloud and data center networks for AI computing. We expect it to enrich our understanding of AI workloads, their communication patterns, and their impacts on networks, and to help the community identify future research directions. We encourage lively debate on issues such as convergence vs. disaggregation, front-end vs. back-end networks, smart edges vs. programmable cores, and the need for new interconnects, topologies, transports, and routing algorithms and protocols.


Topics of Interest

Topics of interest include, but are not limited to:



Submission Instructions

We invite researchers and practitioners to submit original research papers, including position papers on disruptive ideas and early-stage work with the potential to develop into full papers.


Reviewing will be double-blind. Authors must make a good-faith effort to anonymize their submissions. Papers must not include author names and affiliations, and must avoid implicitly disclosing the authors' identities (e.g., via self-citation or funding acknowledgments).


We accept two types of submissions:



Please submit your paper via https://naic26.hotcrp.com/


Formatting

Submissions must be printable PDF files. When creating your submission, you must use the sigconf proceedings template (two-column format, 10-pt font size) available on the official ACM site. LaTeX submissions should use the acmart.cls template with the sigconf option and 10-pt font.


Best Paper Award

The NAIC workshop will feature a best paper award.


Important Dates

Submission deadline:       May 4, 2026
Acceptance notification:   June 7, 2026
Camera-ready deadline:     June 20, 2026
Workshop date:             August 17, 2026

Organizers

Name                 Institution
Alan Liu             Maryland
Maria Apostolaki     Princeton
Danyang Zhuo         Duke

Program Committee

Name                 Institution
Haseeb Ashfaq        NYU
Ran Ben Basat        UCL
Qizhe Cai            UVa
Xiaoqi Chen          Purdue
Kuntai Du            TensorMesh
Soudeh Ghorbani      JHU
Prankur Gupta        Meta
Marios Kogias        Imperial
Ming Liu             Wisconsin
Harsha Madhyastha    USC
Amedeo Sapio         NVIDIA
Wenfei Wu            PKU
Weitao Wang          Google
Jiarong Xing         Rice
Qiao Xiang           Xiamen U.
Annus Zulfiqar       UMich
Hong Zhang           Waterloo
Junxue Zhang         USTC
Yang Zhou            UC Davis

Steering Committee

Name                 Institution
Theophilus A. Benson CMU
Torsten Hoefler      ETH Zurich
TV Lakshman          Nokia Bell Labs
Haoyu Song           Futurewei
Ying Zhang           Meta
Zhi-Li Zhang         Minnesota