mdy666 / Qwen-Native-Sparse-Attention
qwen-nsa
☆57 · Updated 2 weeks ago
Alternatives and similar repositories for Qwen-Native-Sparse-Attention:
Users interested in Qwen-Native-Sparse-Attention are comparing it to the libraries listed below.
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆184 · Updated last week
- Efficient Mixture of Experts for LLM Paper List ☆62 · Updated 4 months ago
- 🔥 A minimal training framework for scaling FLA models ☆107 · Updated last week
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆97 · Updated this week
- More Tokens, Lower Precision: Towards the Optimal Token-Precision Trade-off in KV Cache Compression ☆11 · Updated 3 months ago
- Efficient Triton implementation of Native Sparse Attention (see the sketch after this list). ☆139 · Updated 2 weeks ago
- Code for paper "Patch-Level Training for Large Language Models" ☆82 · Updated 5 months ago
- [ICML 2024 Oral] Official implementation of "Accurate LoRA-Finetuning Quantization of LLMs via Information Retention" ☆65 · Updated last year
- A sparse attention kernel supporting mixed sparse patterns ☆197 · Updated 2 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆87 · Updated 2 months ago
- XAttention: Block Sparse Attention with Antidiagonal Scoring ☆140 · Updated 3 weeks ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆65 · Updated 2 months ago
- Code for paper [ICLR 2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference
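
Several of the repositories above (Qwen-NSA, SeerAttention, XAttention, FlexPrefill) share one underlying idea: score coarse blocks of keys cheaply, keep only the top-scoring blocks per query block, and run dense attention inside the kept blocks. The sketch below is a minimal illustration of that idea only, not any repo's actual API: it assumes a single head, unbatched `(T, d)` tensors, a sequence length divisible by the block size, and uses a mean-pooling block scorer; the function name and parameters are hypothetical.

```python
import torch
import torch.nn.functional as F

def block_sparse_attention(q, k, v, block_size=64, keep_ratio=0.25):
    # q, k, v: (T, d); T must be divisible by block_size.
    # Hypothetical illustrative function, not any listed repo's kernel.
    T, d = q.shape
    nb = T // block_size
    qb = q.view(nb, block_size, d)
    kb = k.view(nb, block_size, d)
    vb = v.view(nb, block_size, d)
    # Cheap block-level scores: mean-pooled ("compressed") queries vs. keys.
    block_scores = qb.mean(dim=1) @ kb.mean(dim=1).T        # (nb, nb)
    k_keep = max(1, int(nb * keep_ratio))
    top = block_scores.topk(k_keep, dim=-1).indices         # (nb, k_keep)
    out = torch.empty_like(qb)
    for i in range(nb):
        ks = kb[top[i]].reshape(-1, d)                      # kept key rows
        vs = vb[top[i]].reshape(-1, d)                      # kept value rows
        # Dense attention restricted to the selected key/value blocks.
        attn = F.softmax(qb[i] @ ks.T / d ** 0.5, dim=-1)
        out[i] = attn @ vs
    return out.reshape(T, d)

# Tiny usage example with random tensors.
q = torch.randn(256, 64)
k = torch.randn(256, 64)
v = torch.randn(256, 64)
y = block_sparse_attention(q, k, v)                         # (256, 64)
```

The real kernels above differ substantially (learned compression, antidiagonal scoring, fused Triton/CUDA implementations), but the select-then-attend structure is the common thread.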