itsdaniele / speculative_mamba
☆15 · Updated 11 months ago
Alternatives and similar repositories for speculative_mamba
Users interested in speculative_mamba are comparing it to the repositories listed below.
- ☆17 · Updated last year
- KV cache compression via sparse coding ☆14 · Updated last week
- Fast and memory-efficient exact attention ☆72 · Updated 8 months ago
- HALO: Hadamard-Assisted Low-Precision Optimization and Training method for finetuning LLMs. 🚀 The official implementation of https://arx… ☆28 · Updated 8 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆80 · Updated last year
- QuTLASS: CUTLASS-Powered Quantized BLAS for Deep Learning ☆125 · Updated last week
- Official code for the paper "HEXA-MoE: Efficient and Heterogeneous-Aware MoE Acceleration with Zero Computation Redundancy" ☆13 · Updated 8 months ago
- ☆146 · Updated 8 months ago
- EoRA: Fine-tuning-free Compensation for Compressed LLM with Eigenspace Low-Rank Approximation ☆25 · Updated 3 months ago
- AdaSplash: Adaptive Sparse Flash Attention (aka Flash Entmax Attention) ☆29 · Updated last month
- The evaluation framework for training-free sparse attention in LLMs ☆102 · Updated 3 weeks ago
- Work in progress. ☆74 · Updated 4 months ago
- ☆36 · Updated last month
- Code for studying the super weight in LLM ☆119 · Updated 11 months ago
- ☆29 · Updated 11 months ago
- Transformers components but in Triton ☆34 · Updated 5 months ago
- ☆18 · Updated 8 months ago
- ☆46 · Updated last year
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆110 · Updated last year
- [COLM 2025] Official PyTorch implementation of "Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models" ☆57 · Updated 4 months ago
- The official repository of Quamba1 [ICLR 2025] & Quamba2 [ICML 2025] ☆59 · Updated 4 months ago
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference ☆54 · Updated 11 months ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆92 · Updated 3 months ago
- ☆83 · Updated last year
- Muon fsdp 2 ☆44 · Updated 3 months ago
- LLM Inference with Microscaling Format ☆32 · Updated 11 months ago
- [CoLM'25] The official implementation of the paper <MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression> ☆148 · Updated 3 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆130 · Updated 11 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆124 · Updated 4 months ago
- [ICML 2024] Official Implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆37 · Updated 9 months ago