dhcode-cpp / online-softmax
The simplest online-softmax notebook for explaining Flash Attention
☆13 · Updated 9 months ago
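For context, the online-softmax trick the notebook walks through computes softmax in a single streaming pass by keeping a running maximum and a normalizer that gets rescaled whenever the maximum changes; this is the building block Flash Attention uses to fuse softmax with the attention matmuls without materializing the full score row. Below is a minimal NumPy sketch of that idea (an illustrative rewrite, not the notebook's actual code):

```python
import numpy as np

def online_softmax(x):
    """Numerically stable softmax computed in one streaming pass over x.

    Keeps a running maximum m and a running normalizer d; when a new
    element raises the maximum, the accumulated normalizer is rescaled
    by exp(old_max - new_max).
    """
    m = -np.inf  # running maximum seen so far
    d = 0.0      # running sum of exp(x_i - m)
    for xi in x:
        m_new = max(m, xi)
        d = d * np.exp(m - m_new) + np.exp(xi - m_new)
        m = m_new
    x = np.asarray(x, dtype=float)
    return np.exp(x - m) / d

# Agrees with the naive two-pass softmax
x = np.random.randn(16)
ref = np.exp(x - x.max()) / np.exp(x - x.max()).sum()
assert np.allclose(online_softmax(x), ref)
```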
Alternatives and similar repositories for online-softmax
Users interested in online-softmax are comparing it to the libraries listed below.
- ☆52 · Updated 2 years ago
- Pipeline-Parallel Lecture: Simplest Dualpipe Implementation. ☆27 · Updated last month
- Inference code for LLaMA models ☆125 · Updated 2 years ago
- Implementation of FlashAttention in PyTorch ☆171 · Updated 9 months ago
- qwen-nsa ☆78 · Updated 6 months ago
- ☢️ Second round of the TensorRT 2023 contest: Llama model inference acceleration based on TensorRT-LLM ☆50 · Updated 2 years ago
- A MoE implementation for PyTorch, [ATC'23] SmartMoE ☆71 · Updated 2 years ago
- ☆79 · Updated last year
- Transformer-related optimization, including BERT, GPT ☆17 · Updated 2 years ago
- ☆33 · Updated 7 months ago
- ☆84 · Updated 2 years ago
- Train LLMs (bloom, llama, baichuan2-7b, chatglm3-6b) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP ☆98 · Updated last year
- Export LLaMA to ONNX ☆136 · Updated 9 months ago
- ☆147 · Updated 3 months ago
- Tiny-DeepSpeed, a minimalistic re-implementation of the DeepSpeed library ☆47 · Updated 2 months ago
- 青稞Talk ☆150 · Updated last week
- How to train an LLM tokenizer ☆153 · Updated 2 years ago
- Lightweight deep learning inference service framework ☆40 · Updated 4 years ago
- A beginner's tutorial on model compression ☆22 · Updated last year
- Efficient Mixture of Experts for LLM Paper List ☆136 · Updated 3 weeks ago
- [ACL 2024] A novel QAT with Self-Distillation framework to enhance ultra low-bit LLMs. ☆122 · Updated last year
- ☆115 · Updated 11 months ago
- DeepSpeed tutorial, annotated examples, and study notes (efficient large-model training) ☆178 · Updated 2 years ago
- DeepSeek Native Sparse Attention pytorch implementation ☆103 · Updated last week
- ☆44 · Updated last month
- This is a personal reimplementation of Google's Infini-transformer, utilizing a small 2b model. The project includes both model and train… ☆58 · Updated last year
- Models and examples built with OneFlow ☆100 · Updated last year
- From MHA, MQA, and GQA to MLA, by 苏剑林, with code ☆29 · Updated 8 months ago
- Model compression toolkit engineered for enhanced usability, comprehensiveness, and efficiency. ☆178 · Updated this week
- Transformer-related optimization, including BERT, GPT ☆59 · Updated 2 years ago