SophiaLi06 / BytePS_THC
THC: Accelerating Distributed Deep Learning Using Tensor Homomorphic Compression
☆19 · Updated last year
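The repository implements THC on top of BytePS. As a quick orientation (this sketch is not from the repo: the function names, the shared `SCALE`, and the stochastic rounding scheme are illustrative assumptions, not THC's actual encoding), the "homomorphic" part of tensor homomorphic compression means compressed gradients can be aggregated directly in the compressed domain, with only the final sum decompressed:

```python
import numpy as np

# Illustrative only: a uniform stochastic quantizer whose integer codes can
# be summed directly. THC's real encoding is more sophisticated; this just
# demonstrates the aggregation-commutes-with-compression property.
SCALE = 0.01  # quantization step, assumed shared by all workers

def compress(grad: np.ndarray) -> np.ndarray:
    scaled = grad / SCALE
    base = np.floor(scaled)
    # round up with probability equal to the fractional part (unbiased)
    return (base + (np.random.rand(*grad.shape) < scaled - base)).astype(np.int32)

def decompress(codes: np.ndarray) -> np.ndarray:
    return codes.astype(np.float32) * SCALE

workers = [np.random.randn(8).astype(np.float32) for _ in range(4)]
aggregated = sum(compress(g) for g in workers)  # integer adds only
print(np.max(np.abs(decompress(aggregated) - sum(workers))))  # small error
```

Because the aggregator never decompresses per-worker tensors, decode cost is paid once per aggregation rather than once per worker.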
Alternatives and similar repositories for BytePS_THC
Users interested in BytePS_THC are comparing it to the libraries listed below.
- Hi-Speed DNN Training with Espresso: Unleashing the Full Potential of Gradient Compression with Near-Optimal Usage Strategies (EuroSys '2… ☆15 · Updated last year
- [TBD] "m4: A Learned Flow-level Network Simulator" by Chenning Li, Anton A. Zabreyko, Arash Nasr-Esfahany, Kevin Zhao, Prateesh Goyal, Mo… ☆15 · Updated this week
- Managed collective communication service ☆22 · Updated 11 months ago
- ☆55 · Updated last year
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆35 · Updated 2 years ago
- ☆37 · Updated 11 months ago
- ☆17 · Updated last year
- Artifacts for our SIGCOMM'22 paper Muri ☆41 · Updated last year
- [NSDI 2023] TopoOpt: Optimizing the Network Topology for Distributed DNN Training ☆32 · Updated 10 months ago
- Cupcake: A Compression Scheduler for Scalable Communication-Efficient Distributed Training (MLSys '23) ☆9 · Updated 2 years ago
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling ☆13 · Updated last year
- [ACM SIGCOMM 2024] "m3: Accurate Flow-Level Performance Estimation using Machine Learning" by Chenning Li, Arash Nasr-Esfahany, Kevin Zha… ☆24 · Updated 10 months ago
- ☆81 · Updated 3 years ago
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆59 · Updated 8 months ago
- GPU-accelerated LLM Training Simulator ☆35 · Updated last month
- ☆44 · Updated last year
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆74 · Updated 2 years ago
- A Hybrid Framework to Build High-performance Adaptive Neural Networks for Kernel Datapath ☆27 · Updated 2 years ago
- ☆51 · Updated 2 years ago
- Artifact for "Shockwave: Fair and Efficient Cluster Scheduling for Dynamic Adaptation in Machine Learning" [NSDI '23] ☆44 · Updated 2 years ago
- Aequitas enables RPC-level QoS in datacenter networks. ☆16 · Updated 3 years ago
- Ok-Topk is a scheme for distributed training with sparse gradients. Ok-Topk integrates a novel sparse allreduce algorithm (less than 6k c… ☆26 · Updated 2 years ago (see the top-k sketch after this list)
- LLM serving cluster simulator ☆108 · Updated last year
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆25 · Updated 8 months ago
- The prototype for the NSDI paper "NetHint: White-Box Networking for Multi-Tenant Data Centers" ☆26 · Updated last year
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) ☆83 · Updated 2 years ago
- A resilient distributed training framework ☆95 · Updated last year
- ☆23 · Updated last year
- ☆69 · Updated 2 years ago
- ☆10 · Updated last month
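Referenced from the Ok-Topk entry above: a minimal sketch of generic top-k gradient sparsification with error feedback. This is background for the technique's name, not Ok-Topk's actual decentralized allreduce; `k`, the residual buffer, and all names here are illustrative assumptions.

```python
import numpy as np

def topk_sparsify(grad: np.ndarray, k: int):
    """Keep the k largest-magnitude entries; return (indices, values)."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def densify(idx: np.ndarray, vals: np.ndarray, n: int) -> np.ndarray:
    out = np.zeros(n, dtype=np.float32)
    out[idx] = vals
    return out

# One worker step with error feedback: entries dropped this round are
# carried into the next gradient, which preserves convergence in practice.
n, k = 1024, 32
residual = np.zeros(n, dtype=np.float32)
grad = np.random.randn(n).astype(np.float32)

corrected = grad + residual
idx, vals = topk_sparsify(corrected, k)
residual = corrected - densify(idx, vals, n)  # remember what was dropped
# (idx, vals) is what a sparse allreduce exchanges instead of the dense grad
```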