dhcode-cpp / online-softmax
The simplest online-softmax notebook for explaining Flash Attention
☆10 · Updated 4 months ago
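Online softmax computes the softmax statistics in a single streaming pass by keeping a running maximum and a rescaled running sum, which is the core trick Flash Attention uses to avoid materializing the full attention matrix. A minimal sketch in plain Python (illustrative only, not code from this repository; the function name is made up):

```python
import math

def online_softmax(xs):
    """One-pass softmax statistics: keep a running max m and a running
    sum d of exp(x - m); whenever m grows, rescale d by exp(m_old - m_new)."""
    m = float("-inf")  # running maximum seen so far
    d = 0.0            # running sum of exp(x - m)
    for x in xs:
        m_new = max(m, x)
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    # A second pass only emits the normalized values; the statistics
    # (m, d) themselves were computed in one streaming pass.
    return [math.exp(x - m) / d for x in xs]

print(online_softmax([1.0, 2.0, 3.0]))  # matches the standard softmax
```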
Alternatives and similar repositories for online-softmax
Users interested in online-softmax are comparing it to the repositories listed below.
- ☢️ TensorRT 2023 Hackathon second round: inference acceleration optimization for the Llama model based on TensorRT-LLM ☆47 · Updated last year
- Transformer-related optimization, including BERT and GPT ☆17 · Updated last year
- A MoE implementation for PyTorch, [ATC'23] SmartMoE ☆62 · Updated last year
- ☆79 · Updated last year
- ☆52 · Updated last year
- Simplify ONNX models larger than 2 GB ☆56 · Updated 5 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆36 · Updated last month
- [ACL 2024] A novel QAT with self-distillation framework to enhance ultra-low-bit LLMs. ☆111 · Updated last year
- ☆16 · Updated last year
- Transformer-related optimization, including BERT and GPT ☆59 · Updated last year
- Odysseus: Playground of LLM Sequence Parallelism ☆69 · Updated 11 months ago
- Inference code for LLaMA models ☆120 · Updated last year
- Implementation of FlashAttention in PyTorch ☆146 · Updated 4 months ago
- An easy-to-use package for implementing SmoothQuant for LLMs ☆98 · Updated last month
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆23 · Updated last year
- Summary of system papers/frameworks/code/tools on training or serving large models ☆56 · Updated last year
- A beginner's tutorial on model compression ☆22 · Updated 10 months ago
- ☆24 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆121 · Updated 4 months ago
- ☆125 · Updated 2 weeks ago
- From MHA, MQA, GQA to MLA, by 苏剑林 (Su Jianlin), with code ☆18 · Updated 2 months ago
- ☆94 · Updated 8 months ago
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆130 · Updated last year
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP. ☆95 · Updated last year
- A LLaMA model inference framework implemented in CUDA C++ ☆56 · Updated 6 months ago
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆81 · Updated last month
- ☆11 · Updated last year
- ☆132 · Updated 2 months ago
- ☆139 · Updated last year
- A Tight-fisted Optimizer ☆47 · Updated 2 years ago