InternLM / Awesome-LLM-Training-System
☆31 · Updated 7 months ago
Alternatives and similar repositories for Awesome-LLM-Training-System:
Users interested in Awesome-LLM-Training-System are comparing it to the repositories listed below.
- Implements Flash Attention using CuTe. ☆74 · Updated 3 months ago
- ☆52 · Updated 11 months ago
- ☆125 · Updated 3 weeks ago
- A sparse attention kernel supporting mixed sparse patterns. ☆168 · Updated last month
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆51 · Updated 7 months ago
- ☆64 · Updated 3 months ago
- 16-fold memory-access reduction with nearly no loss. ☆81 · Updated last week
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of … ☆151 · Updated 8 months ago
- ☆88 · Updated 6 months ago
- DeeperGEMM: a heavily optimized version. ☆61 · Updated last week
- ☆74 · Updated 3 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆74 · Updated 4 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference. ☆260 · Updated 4 months ago
- ☆90 · Updated 4 months ago
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of pap… ☆235 · Updated 2 weeks ago
- [OSDI '24] Serving LLM-based Applications Efficiently with Semantic Variable. ☆150 · Updated 6 months ago
- Implements some methods of LLM KV cache sparsity. ☆30 · Updated 9 months ago
- 📚FFPA(Split-D): Yet another Faster Flash Prefill Attention with O(1) GPU SRAM complexity for headdim > 256, ~2x↑🎉 vs SDPA EA. ☆154 · Updated this week
- ☆46 · Updated 2 months ago
- High-performance Transformer implementation in C++. ☆109 · Updated 2 months ago
- Sequence-level 1F1B schedule for LLMs. ☆17 · Updated 9 months ago
- nnScaler: Compiling DNN models for parallel training. ☆103 · Updated last month
- ☆81 · Updated 2 years ago
- Curated collection of papers on MoE model inference. ☆110 · Updated last month
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak⚡️ performance. ☆62 · Updated 3 weeks ago
- A simple calculator for LLM MFU (Model FLOPs Utilization). ☆27 · Updated 3 weeks ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆108 · Updated 2 months ago
- [NeurIPS 2024] Efficient LLM Scheduling by Learning to Rank. ☆42 · Updated 4 months ago
- Code for the paper [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference. ☆62 · Updated this week
- Quantized attention on GPU. ☆45 · Updated 4 months ago
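One entry above is an MFU calculator; as background, a minimal sketch of the standard MFU formula (achieved FLOPs per second divided by hardware peak, using the common 6N FLOPs-per-token approximation for dense-transformer training) might look like the following. The function name and example numbers are illustrative assumptions, not code from that repository.

```python
def train_mfu(params: float, tokens_per_sec: float, peak_flops: float) -> float:
    """Approximate training MFU with the common 6*N FLOPs-per-token rule
    (forward + backward pass of a dense transformer with N parameters)."""
    achieved_flops = 6 * params * tokens_per_sec  # FLOPs actually executed per second
    return achieved_flops / peak_flops

# Example (assumed numbers): a 7B-parameter model training at 4,000 tokens/s
# on hardware with 312 TFLOP/s of peak BF16 throughput (an A100-like figure).
mfu = train_mfu(7e9, 4_000, 312e12)
print(f"{mfu:.1%}")  # ≈ 53.8%
```

Higher MFU means the training system is converting more of the hardware's theoretical throughput into useful model FLOPs; many of the systems in this list exist to push that number up.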