astra-sim / tacos
TACOS: [T]opology-[A]ware [Co]llective Algorithm [S]ynthesizer for Distributed Machine Learning
☆26 · Updated 2 months ago
Alternatives and similar repositories for tacos
Users interested in tacos are comparing it to the libraries listed below
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆74 · Updated 2 years ago
- Repository for MLCommons Chakra schema and tools ☆39 · Updated last year
- ☆52 · Updated 2 months ago
- Repository for MLCommons Chakra schema and tools ☆125 · Updated last month
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆54 · Updated last year
- LLM Inference analyzer for different hardware platforms ☆87 · Updated 2 months ago
- Synthesizer for optimal collective communication algorithms ☆116 · Updated last year
- ☆81 · Updated 2 years ago
- LLM serving cluster simulator ☆109 · Updated last year
- ☆147 · Updated last year
- ☆24 · Updated 3 years ago
- Microsoft Collective Communication Library ☆66 · Updated 9 months ago
- ☆27 · Updated 5 years ago
- Open-source implementation for "Helix: Serving Large Language Models over Heterogeneous GPUs and Network via Max-Flow" ☆64 · Updated 9 months ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆137 · Updated last month
- DeepSeek-V3/R1 inference performance simulator ☆165 · Updated 5 months ago
- ASTRA-sim2.0: Modeling Hierarchical Networks and Disaggregated Systems for Large-model Training at Scale ☆421 · Updated last week
- NCCL Profiling Kit ☆143 · Updated last year
- Artifacts for our ASPLOS'23 paper ElasticFlow ☆52 · Updated last year
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores. ☆89 · Updated 2 years ago
- ☆84 · Updated 5 months ago
- WaferLLM: Large Language Model Inference at Wafer Scale ☆49 · Updated last month
- ☆50 · Updated 6 years ago
- Proteus: A High-Throughput Inference-Serving System with Accuracy Scaling ☆13 · Updated last year
- ☆23 · Updated last year
- GVProf: A Value Profiler for GPU-based Clusters ☆51 · Updated last year
- ☆181 · Updated last year
- NCCL Examples from Official NVIDIA NCCL Developer Guide. ☆18 · Updated 7 years ago
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU sche… ☆100 · Updated 2 years ago
- ☆55 · Updated 3 months ago