skypilot-org / skypilot-catalog
☆26 · Updated this week
Alternatives and similar repositories for skypilot-catalog
Users interested in skypilot-catalog are comparing it to the repositories listed below.
- ☆47 · Updated last year
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆64 · Updated last week
- Cray-LM unified training and inference stack. ☆22 · Updated 10 months ago
- Easy, Fast, and Scalable Multimodal AI ☆78 · Updated 2 weeks ago
- 👷 Build compute kernels ☆192 · Updated this week
- AI-Driven Research Systems (ADRS) ☆81 · Updated 3 weeks ago
- 🏙 Interactive performance profiling and debugging tool for PyTorch neural networks. ☆64 · Updated 10 months ago
- A collection of reproducible inference engine benchmarks ☆38 · Updated 7 months ago
- Ship correct and fast LLM kernels to PyTorch ☆126 · Updated this week
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP ☆141 · Updated 3 months ago
- Google TPU optimizations for transformers models ☆124 · Updated 10 months ago
- PyTorch-centric eager-mode debugger ☆48 · Updated 11 months ago
- PyTorch Distributed-native training library for LLMs/VLMs with out-of-the-box Hugging Face support ☆202 · Updated this week
- ☆48 · Updated last year
- ML/DL Math and Method notes ☆64 · Updated 2 years ago
- Load compute kernels from the Hub ☆348 · Updated this week
- Memory-optimized Mixture of Experts ☆69 · Updated 4 months ago
- Tutorial to get started with SkyPilot! (see the launch sketch after this list) ☆58 · Updated last year
- Official code for "SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient" ☆147 · Updated 2 years ago
- ☆219 · Updated 10 months ago
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- Make Triton easier ☆49 · Updated last year
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆133 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆257 · Updated this week
- ring-attention experiments ☆160 · Updated last year
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆146 · Updated last year
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆149 · Updated this week
- Official implementation for Training LLMs with MXFP4 ☆112 · Updated 7 months ago
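
Since several entries circle back to SkyPilot itself (skypilot-catalog supplies the instance and pricing data that SkyPilot resolves resource requests against, and the tutorial entry above covers getting started), here is a minimal launch sketch using SkyPilot's documented Python SDK. Treat it as illustrative: the `sky.Task` / `sky.Resources` / `sky.launch` entry points are real, but exact signatures vary across releases.

```python
# Minimal SkyPilot launch sketch. When resolving the resource request,
# SkyPilot consults skypilot-catalog data (instance types, prices,
# accelerator availability) to pick a matching cloud instance.
# Assumption: sky.Task / sky.Resources / sky.launch as in SkyPilot's
# documented Python API; signatures may differ between versions.
import sky

# Describe the job: the command to run and the hardware it needs.
task = sky.Task(run="nvidia-smi")
task.set_resources(sky.Resources(accelerators="T4:1"))

# Launch on the cheapest matching instance found in the catalog.
sky.launch(task, cluster_name="catalog-demo")
```

The equivalent CLI flow (`sky launch` with a task YAML) is what the tutorial repository listed above walks through.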