openmlsys / openmlsys-en
"Machine Learning Systems: Design and Implementation" - English Version
☆25 · Updated 5 months ago
Alternatives and similar repositories for openmlsys-en
Users interested in openmlsys-en are comparing it to the repositories listed below.
- A minimal implementation of vLLM. ☆44 · Updated 11 months ago
- A curated list of awesome projects and papers for distributed training or inference. ☆239 · Updated 8 months ago
- Examples and exercises from the book "Programming Massively Parallel Processors: A Hands-on Approach" by David B. Kirk and Wen-mei W. Hwu (T… ☆69 · Updated 4 years ago
- Cataloging released Triton kernels. ☆238 · Updated 5 months ago
- 📑 Dive into Big Model Training. ☆114 · Updated 2 years ago
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆130 · Updated last year
- Performance of the C++ interface of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios. ☆38 · Updated 4 months ago
- ⚡️FFPA: Extends FlashAttention-2 with Split-D, achieving ~O(1) SRAM complexity for large head dimensions; 1.8x–3x faster than SDPA. ☆186 · Updated last month
- Decoding Attention is specifically optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆38 · Updated 2 weeks ago
- Distributed (multi-node) training of a Transformer model. ☆72 · Updated last year
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length. ☆90 · Updated 2 months ago
- Collection of kernels written in the Triton language. ☆132 · Updated 2 months ago
- CUDA matrix multiplication optimization. ☆196 · Updated 11 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity. ☆214 · Updated last year
- Ring-attention experiments. ☆144 · Updated 8 months ago
- Implementation of FlashAttention using CuTe. ☆87 · Updated 6 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆100 · Updated 3 weeks ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆55 · Updated 10 months ago
- Papers and accompanying code for AI systems. ☆311 · Updated 2 months ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆183 · Updated 4 months ago
- Code base and slides for ECE408: Applied Parallel Programming on GPUs. ☆124 · Updated 3 years ago