calculon-ai / calculon
☆154 · Updated last year
Alternatives and similar repositories for calculon
Users interested in calculon are comparing it to the libraries listed below.
- LLM serving cluster simulator ☆116 · Updated last year
- Synthesizer for optimal collective communication algorithms ☆118 · Updated last year
- ASTRA-sim2.0: Modeling Hierarchical Networks and Disaggregated Systems for Large-model Training at Scale ☆450 · Updated this week
- Repository for MLCommons Chakra schema and tools ☆131 · Updated last month
- ☆83 · Updated 2 years ago
- ☆53 · Updated 4 months ago
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆147 · Updated 3 years ago
- DeepSeek-V3/R1 inference performance simulator ☆170 · Updated 6 months ago
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) in deep learning on Tensor Cores. ☆89 · Updated 2 years ago
- Microsoft Collective Communication Library ☆367 · Updated 2 years ago
- LLM inference analyzer for different hardware platforms ☆94 · Updated 3 months ago
- ☆194 · Updated last year
- TACOS: [T]opology-[A]ware [Co]llective Algorithm [S]ynthesizer for Distributed Machine Learning ☆27 · Updated 4 months ago
- ☆130 · Updated this week
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI '23) ☆88 · Updated 2 years ago
- Automatic Mapping Generation, Verification, and Exploration for ISA-based Spatial Accelerators ☆115 · Updated 2 years ago
- A repository of personal notes and annotated papers collected during daily research ☆155 · Updated 3 weeks ago
- ☆90 · Updated 6 months ago
- TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches ☆76 · Updated 2 years ago
- AI and Memory Wall ☆219 · Updated last year
- ☆24 · Updated 3 years ago
- Artifact for PPoPP '22 QGTC: Accelerating Quantized GNN via GPU Tensor Core ☆30 · Updated 3 years ago
- REEF is a GPU-accelerated DNN inference serving system that enables instant kernel preemption and biased concurrent execution in GPU sche… ☆101 · Updated 2 years ago
- Artifact of the OSDI '24 paper "Llumnix: Dynamic Scheduling for Large Language Model Serving" ☆62 · Updated last year
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆144 · Updated 3 months ago
- A ChatGPT (GPT-3.5) & GPT-4 Workload Trace to Optimize LLM Serving Systems ☆213 · Updated 3 months ago
- ☆109 · Updated last year
- Curated collection of papers in machine learning systems ☆433 · Updated 3 weeks ago
- An interference-aware scheduler for fine-grained GPU sharing ☆150 · Updated 9 months ago
- ☆39 · Updated last year