This paper presents SDP, a protocol design that integrates data encryption into emerging datacenter transports such as NDP and Homa. SDP supports existing NIC offloads designed for TLS over TCP, a native transport protocol number alongside TCP and UDP, and a message-based abstraction that enables low-latency RPCs with fine-grained parallelism.
@inproceedings{10.1145/3718958.3750482,
  author    = {Gao, Tianyi and Ma, Xinshu and Narreddy, Suhas and Luo, Eugenio and Chien, Steven and Honda, Michio},
  title     = {Designing Transport-Level Encryption for Datacenter Networks},
  year      = {2025},
  isbn      = {9798400715242},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3718958.3750482},
  doi       = {10.1145/3718958.3750482},
  booktitle = {Proceedings of the ACM SIGCOMM 2025 Conference},
  pages     = {1248--1250},
  numpages  = {3},
  location  = {S\~{a}o Francisco Convent, Coimbra, Portugal},
  series    = {SIGCOMM '25},
}
This paper presents SDP, a protocol design that integrates data encryption into emerging datacenter transports such as NDP and Homa. SDP enables a new design point for transport-level encryption that supports existing NIC offloads designed for TLS over TCP, a native transport protocol number alongside TCP and UDP, and a message-based abstraction that enables low-latency RPCs, various in-network compute, and host-stack load balancing.
@inproceedings{10.1145/3735358.3735389,
  author    = {Gao, Tianyi and Ma, Xinshu and Narreddy, Suhas and Luo, Eugenio and Chien, Steven W. D. and Honda, Michio},
  title     = {Designing Transport-Level Encryption for Datacenter Networks},
  year      = {2025},
  isbn      = {9798400714016},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3735358.3735389},
  doi       = {10.1145/3735358.3735389},
  booktitle = {Proceedings of the 9th Asia-Pacific Workshop on Networking},
  pages     = {142--149},
  numpages  = {8},
  location  = {Shanghai, China},
  series    = {APNET '25},
}
Efficient resource utilization in server clusters is essential for maximizing service capacity and minimizing latency while reducing infrastructure costs, whether in edge clouds or hyperscale deployments. Current approaches face significant limitations: layer-4 load balancing (L4LB) alone causes load imbalance over time with long-lived connections, while layer-7 load balancing (L7LB) introduces substantial CPU, memory, and network overhead despite enabling fine-grained server selection based on application-level requests. This paper presents XO, a set of concepts and techniques that enable a TCP server to offload entire TCP connections, together with their application-request processing, to another machine at request granularity. Together with L4LB, XO achieves L7LB-level load distribution without the associated overheads.
@inproceedings{10.1145/3735358.3735377,
  author    = {Li, Shuo and Chien, Steven W.D. and Gao, Tianyi and Honda, Michio},
  title     = {Remote TCP Connection Offload with XO},
  year      = {2025},
  isbn      = {9798400714016},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3735358.3735377},
  doi       = {10.1145/3735358.3735377},
  booktitle = {Proceedings of the 9th Asia-Pacific Workshop on Networking},
  pages     = {37--43},
  numpages  = {7},
  series    = {APNET '25},
}