dhcode-cpp / NSA-pytorch
PyTorch implementation of DeepSeek's Native Sparse Attention (NSA)
☆54 · Updated 3 weeks ago
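As a quick orientation to what the repo implements, below is a minimal PyTorch sketch of the block-selection idea behind Native Sparse Attention: summarize the key sequence into compressed blocks, score the blocks against each query, and run dense attention only inside the top-k selected blocks. The function name, mean-pooling compression, and single-query-step shapes are illustrative assumptions, not the repo's actual API.

```python
# Minimal sketch of NSA-style top-k block selection (illustrative, not the
# repo's API). Compression here is plain mean pooling; the paper learns it.
import torch
import torch.nn.functional as F

def topk_block_sparse_attention(q, k, v, block_size=64, top_k=4):
    """q: (B, H, 1, D) one decoding step; k, v: (B, H, T, D)."""
    B, H, T, D = k.shape
    pad = (-T) % block_size  # pad T up to a multiple of block_size
    if pad:
        k = F.pad(k, (0, 0, 0, pad))
        v = F.pad(v, (0, 0, 0, pad))
    n_blocks = k.shape[2] // block_size

    # Compress each key block into a single summary vector (mean pooling
    # stands in for the learned compression used in the paper).
    k_blk = k.view(B, H, n_blocks, block_size, D)
    v_blk = v.view(B, H, n_blocks, block_size, D)
    summaries = k_blk.mean(dim=3)                             # (B, H, n, D)

    # Score blocks against the query and keep the top-k block indices.
    blk_scores = torch.einsum("bhqd,bhnd->bhqn", q, summaries)
    idx = blk_scores.topk(min(top_k, n_blocks), dim=-1).indices.squeeze(2)

    # Gather the selected blocks' keys/values and flatten them into one
    # short sequence of top_k * block_size tokens.
    gidx = idx[..., None, None].expand(-1, -1, -1, block_size, D)
    k_sel = k_blk.gather(2, gidx).flatten(2, 3)
    v_sel = v_blk.gather(2, gidx).flatten(2, 3)

    # Dense attention restricted to the selected tokens.
    attn = torch.einsum("bhqd,bhtd->bhqt", q, k_sel) / D**0.5
    return torch.einsum("bhqt,bhtd->bhqd", attn.softmax(-1), v_sel)

q, k, v = (torch.randn(1, 8, s, 64) for s in (1, 1024, 1024))
out = topk_block_sparse_attention(q, k, v)  # -> (1, 8, 1, 64)
```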
Alternatives and similar repositories for NSA-pytorch:
Users interested in NSA-pytorch are comparing it to the libraries listed below.
- qwen-nsa ☆44 · Updated last week
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆89 · Updated 2 weeks ago
- TransMLA: Multi-Head Latent Attention Is All You Need ☆221 · Updated 3 weeks ago
- A sparse attention kernel supporting mixed sparse patterns ☆169 · Updated last month
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆168 · Updated last month
- Code for paper: [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆66 · Updated this week
- Triton documentation in Simplified Chinese / Triton 中文文档 ☆62 · Updated 2 months ago
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆274 · Updated last month
- [ICLR 2025] PEARL: Parallel Speculative Decoding with Adaptive Draft Length ☆62 · Updated 2 weeks ago
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆52 · Updated last month
- The official implementation of the paper <MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression> ☆122 · Updated 3 months ago
- XAttention: Block Sparse Attention with Antidiagonal Scoring ☆118 · Updated this week
- Awesome list for LLM quantization ☆190 · Updated 3 months ago
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details. ☆166 · Updated last week
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆157 · Updated 8 months ago
- Efficient Triton implementation of Native Sparse Attention. ☆127 · Updated this week
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆130 · Updated 9 months ago
- [ACL 2024] A novel QAT framework with self-distillation to enhance ultra-low-bit LLMs. ☆109 · Updated 10 months ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆131 · Updated last month
- 🔥 A minimal training framework for scaling FLA models ☆82 · Updated last week
- The Official Implementation of Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference (a minimal eviction sketch follows this list) ☆68 · Updated 2 months ago
- OpenSeek aims to unite the global open source community to drive collaborative innovation in algorithms, data and systems to develop next… ☆127 · Updated last week
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆458 · Updated last week
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retention ☆63 · Updated 11 months ago
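Referring back to the Ada-KV entry above, here is a minimal sketch of adaptive per-head KV-cache eviction: a global token budget is split across heads and each head keeps only its top-scoring cache entries. The entropy-based allocation rule and all names here are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch of adaptive per-head KV-cache eviction in the spirit of
# Ada-KV (illustrative; the allocation rule below is an assumption, not
# the paper's exact algorithm).
import torch

def adaptive_kv_eviction(k_cache, v_cache, attn_scores, total_budget):
    """k_cache, v_cache: (H, T, D); attn_scores: (H, T) accumulated weights."""
    H, T, _ = k_cache.shape
    # Give flatter heads (high attention entropy) a larger share of the
    # budget, since they need more entries to preserve their output.
    probs = attn_scores / attn_scores.sum(dim=-1, keepdim=True)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)   # (H,)
    budgets = (entropy / entropy.sum() * total_budget).long().clamp(min=1)

    kept_k, kept_v = [], []
    for h in range(H):
        b = min(int(budgets[h]), T)
        keep = attn_scores[h].topk(b).indices.sort().values  # keep positions ordered
        kept_k.append(k_cache[h, keep])
        kept_v.append(v_cache[h, keep])
    return kept_k, kept_v  # ragged per-head caches

k = torch.randn(8, 512, 64)
v = torch.randn(8, 512, 64)
scores = torch.rand(8, 512)  # e.g. attention weights accumulated so far
ks, vs = adaptive_kv_eviction(k, v, scores, total_budget=1024)
```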