Program

Keynotes

Keynote #1

Towards Combinatorial Interpretability of Neural Computation

Nir Shavit (MIT)


We introduce combinatorial interpretability, a methodology for understanding neural computation by analyzing the combinatorial structures in the sign-based categorization of a network’s weights and biases. We demonstrate its power through feature channel coding, a theory that explains how neural networks compute Boolean expressions and potentially underlies other categories of neural network computation. According to this theory, features are computed via feature channels: unique cross-neuron encodings shared among the inputs the feature operates on. Because different feature channels share neurons, the neurons are polysemantic and the channels interfere with one another, making the computation appear inscrutable.

We show how to decipher these computations by analyzing a network’s feature channel coding, offering complete mechanistic interpretations of several small neural networks that were trained with gradient descent. Crucially, this is achieved via static combinatorial analysis of the weight matrices, without examining activations or training new autoencoding networks. Feature channel coding reframes the superposition hypothesis, shifting the focus from neuron activation directionality in high-dimensional space to the combinatorial structure of codes. It also allows us for the first time to exactly quantify and explain the relationship between a network’s parameter size and its computational capacity (i.e. the set of features it can compute with low error), a relationship that is implicitly at the core of many modern scaling laws. This work is joint with Micah Adler and Dan Alistarh.
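The flavor of feature channel coding can be sketched with a toy example (our own illustration, with made-up weights, not a construction from the paper): two Boolean OR features share a three-neuron ReLU layer. Their channels overlap on one neuron, which therefore becomes polysemantic, and the readout weights are chosen so that cross-channel interference cancels exactly.

```python
from itertools import product

# Toy illustration (not the paper's construction): two Boolean features,
# f1 = x1 OR x2 and f2 = x3 OR x4, computed by a 3-neuron ReLU layer.
# The channel for f1 is neurons {0, 1}; the channel for f2 is neurons {1, 2}.
# Neuron 1 is shared by both channels, making it polysemantic.

def relu(v):
    return [max(0, z) for z in v]

W = [
    [1, 1, 0, 0],   # neuron 0: listens only to f1's inputs
    [1, 1, 1, 1],   # neuron 1: shared by both channels (polysemantic)
    [0, 0, 1, 1],   # neuron 2: listens only to f2's inputs
]
U = [
    [1, 1, -1],     # readout for f1: -1 on neuron 2 cancels f2's interference
    [-1, 1, 1],     # readout for f2: -1 on neuron 0 cancels f1's interference
]

def forward(x):
    h = relu([sum(w * xi for w, xi in zip(row, x)) for row in W])
    return [sum(u * hi for u, hi in zip(row, h)) > 0 for row in U]

# The network computes both ORs correctly on all 16 Boolean inputs,
# even though no neuron is dedicated to a single feature.
for x in product([0, 1], repeat=4):
    f1, f2 = forward(x)
    assert f1 == bool(x[0] or x[1]) and f2 == bool(x[2] or x[3])
```

Reading the sign pattern of `W` and `U` alone is enough to recover which inputs feed which feature, which is the kind of static, activation-free analysis the abstract describes.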

Nir Shavit received B.Sc. and M.Sc. degrees in Computer Science from the Technion – Israel Institute of Technology in 1984 and 1986, and a Ph.D. in Computer Science from the Hebrew University of Jerusalem in 1990. Shavit is a co-author of the book The Art of Multiprocessor Programming. He is a recipient of the 2004 Gödel Prize in theoretical computer science for his work on applying tools from algebraic topology to model shared memory computability and of the 2012 Dijkstra Prize in Distributed Computing for the introduction of Software Transactional Memory.

For many years his main interests were techniques for designing, implementing, and reasoning about multiprocessor algorithms. These days he is interested in the interpretability of neural computation and in understanding the relationship between deep learning and how neural tissue computes; he is part of an effort to do so by extracting connectivity maps of the brain, a field called connectomics.

Keynote #2

Disaggregated Memory and the Revival of Memory Research

Marcos Aguilera


Emerging hardware technologies bring exciting new capabilities and challenges to our research menu. One such technology is disaggregated memory. While its roots date back to the early 1990s, it is only now that this technology is being rolled out. Disaggregated memory allows servers in a data center to share memory that is externally connected. This form of sharing differs conceptually from traditional shared memory in many ways: performance, fault model, coherence, and the ability to interact with other mechanisms. These differences open up new applications and research questions on how best to use this memory. In this talk, we explore some recent and ongoing work in this area.

Marcos K. Aguilera is a researcher and engineer whose technical interests span all aspects of distributed systems, including both theory and practice. He has worked at Compaq SRC, HP Labs, MSR Silicon Valley, VMware Research Group, and Broadcom. He has served as program chair for OSDI, SoCC, FAST, DISC, OPODIS, and ICDCN and will serve as program chair for NSDI. He received an MS and PhD in Computer Science from Cornell University, and a BE in Computer Science from Universidade Estadual de Campinas in Brazil.

Keynote #3

Scaling LSM-Trees for the AI Age and Beyond

Niv Dayan (University of Toronto)

Log-Structured Merge-trees (LSM-trees) underpin large-scale storage systems such as Bigtable, DynamoDB, and RocksDB, and are foundational to ML and AI infrastructure including feature stores, recommendation engines, and vector databases. While their write-optimized design is well-suited to write-intensive workloads, scaling both read and write performance simultaneously remains a central challenge.

This talk revisits core LSM-tree design principles and presents recent advances that upend longstanding trade-offs between read and write efficiency. We discuss the use of range filters as a replacement for traditional Bloom filters and introduce techniques for optimizing filter space allocation across levels to reduce false positives and compaction costs simultaneously. These techniques unlock new regimes of performance, enabling LSM-based systems to scale more gracefully under modern demands.
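Why per-level filter space allocation matters can be seen with a back-of-the-envelope sketch (our own illustration, not material from the talk). Using the standard Bloom filter approximation FPR ≈ e^(−(ln 2)² · bits/key), a point read that misses probes a filter at every level, so the expected number of wasted I/Os is the sum of per-level false-positive rates. Skewing a fixed total memory budget toward smaller levels lowers that sum relative to a uniform bits-per-key allocation:

```python
import math

def fpr(bits_per_key):
    # Standard Bloom filter false-positive approximation.
    return math.exp(-(math.log(2) ** 2) * bits_per_key)

# A leveled LSM-tree with size ratio 10 (illustrative numbers).
levels = [10**6, 10**7, 10**8]   # entries per level
budget = 10 * sum(levels)        # total filter memory: 10 bits/key on average

# Uniform allocation: every level gets 10 bits/key.
uniform = sum(fpr(10.0) for _ in levels)

# Skewed allocation: minimize sum(exp(-c * b_i)) subject to
# sum(n_i * b_i) = budget. The Lagrange solution is b_i = A - ln(n_i)/c,
# which gives smaller levels more bits/key. (Assumes all b_i come out
# positive, which holds for these numbers.)
c = math.log(2) ** 2
A = (budget + sum(n * math.log(n) for n in levels) / c) / sum(levels)
bits = [A - math.log(n) / c for n in levels]
optimized = sum(fpr(b) for b in bits)

# Same memory budget, noticeably lower total false-positive rate.
assert abs(sum(n * b for n, b in zip(levels, bits)) - budget) < 1e-6 * budget
assert optimized < uniform
```

Under this allocation the expected false positives per level come out proportional to the level's size, so the scarce memory does the most good where lookups are cheapest to protect.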

Niv Dayan is an Assistant Professor at the University of Toronto (U of T). His research focuses on designing and analyzing data structures for database and storage systems. Before joining U of T, he was a Research Scientist at Pliops and a Technical Advisor to Speedb. He received his Ph.D. from the IT University of Copenhagen and was a postdoctoral researcher at Harvard.

Full Program

Sessions:

 

Session 1: Accelerated Systems

Chair: Binoy Ravindran (Virginia Tech)

  • Keep Your Friends Close: Leveraging Affinity Groups to Accelerate AI Inference Workflows
    Thiago Garrett (University of Oslo); Weijia Song (Cornell University); Roman Vitenberg (University of Oslo); Ken Birman (Cornell University)
  • LATTICE: Efficient In-Memory DNN Model Versioning
    Manoj Saha, Ashikee Ghosh, Raju Rangaswami, Yanzhao Wu, Janki Bhimani (Florida International University)
  • DPUF: DPU-accelerated Near-storage Secure Filtering
    Narangerelt Batsoyol (UC San Diego); Daniel Waddington, Swami Sundararaman (IBM Research); Steven Swanson (UC San Diego)
  • SAKER: A Software Accelerated Key-Value Service via the NVMe Interface (Short)
    Chen Zhong (University of Texas at Arlington); Wenguang Wang (VMware by Broadcom); Song Jiang (University of Texas at Arlington)

 

Session 2: Memory Management

Chair: Nadav Amit (Technion – Israel Institute of Technology)

  • Oasis: Persistent Memory Oversubscription for Emerging Applications via Persistent MMAP
    Ziyi Zhao, Scott Rixner (Rice University)
  • Mitigating the Costs of Dynamic Memory Management in SGX
    Vijay Dhanraj (Intel Labs); Harpreet Singh Chawla (Texas A&M University); Tao Zhang, Daniel Manila (The University of North Carolina at Chapel Hill); Eric Schneider (Virginia Tech); Erica Fu (The University of North Carolina at Chapel Hill); Mona Vij (Intel Labs); Chia-Che Tsai (Texas A&M University); Donald Porter (The University of North Carolina at Chapel Hill)
  • Can Hardware Outsmart Software in Tiered Memory Management? A CMM-H Case Study (Short)
    Zhen Lin, Yujie Yang, Lingfeng Xiang (The University of Texas at Arlington); Lianjie Cao, Faraz Ahmed (Hewlett Packard Labs); Jia Rao, Hui Lu (The University of Texas at Arlington); Puneet Sharma (Hewlett Packard Labs)
  • vtism: Efficient Tiered Memory Management for VMs with CXL
    Zhixing Lu, Yang Ou, Lizhou Wu, Zicong Wang, Mingche Lai, Xuran Ge, Zhenlong Song, Min Xie (National University of Defense Technology)

 

Session 3: Storage Systems

Chair: Pramod Bhatotia (TU Munich)

  • TieredKV: A Tiered LSM-Learned Index Design for Superior Performance on Storage
    Wenlong Wang, David Hung-Chang Du (University of Minnesota, Twin Cities)
  • CollapseDB: Exploring Multi-Level Compaction in LSM-Trees to Enhance Write Performance
    Prajwal Challa, Yan Wang, Song Jiang (University of Texas at Arlington)
  • Let It Slide: Online Deduplicated Data Migration
    Shalev Kuba, Gala Yadgar (Technion)
  • Enhanced File System Testing through Input and Output Coverage
    Yifei Liu (Stony Brook University); Geoff Kuenning (Harvey Mudd College); Md. Kamal Parvez, Scott Smolka, Erez Zadok (Stony Brook University)

 

Session 4: Operating Systems

Chair: TBD

  • The Impact of Kernel Asynchronous APIs on the Performance of a Kernel VPN (Short)
    Honore Cesaire Mounah (Inria); Djob Mvondo (Univ Rennes, CNRS, Inria, IRISA); Julia Lawall (Inria); Yérom-David Bromberg (Univ Rennes, CNRS, Inria, IRISA)
  • Practical Whole-System Persistence
    Dustin Nguyen (Friedrich-Alexander-Universität Erlangen-Nürnberg); Oliver Giersch (Brandenburgische Technische Universität Cottbus-Senftenberg); Thomas Preisner, Jonathan Krebs (Friedrich-Alexander-Universität Erlangen-Nürnberg); Henriette Herzog (Ruhr-Universität Bochum); Timo Hönig (Ruhr-Universität Bochum); Rüdiger Kapitza (Friedrich-Alexander-Universität Erlangen-Nürnberg); Jörg Nolte (Brandenburgische Technische Universität Cottbus-Senftenberg); Wolfgang Schröder-Preikschat (Friedrich-Alexander-Universität Erlangen-Nürnberg)
  • DeepErr: Automatic Root-Cause Analysis of System Call Failures 
    Nadav Amit (Technion); Michael Wei (VMware Research)
  • PANGOLIN: a Comprehensive Testing Framework for Configuration-Rich KV Stores
    Shaohua Duan (Washington State University); Sudarsun Kannan (Rutgers University); Andrea Arpaci-Dusseau, Remzi Arpaci-Dusseau (University of Wisconsin–Madison)

 

Poster Session

Chair: Sarel Cohen (The Academic College of Tel Aviv-Yaffo)

For accepted posters, see