The official repository of Quamba [ICLR 2025] & Quamba2 [ICML 2025]
☆68 · Jun 19, 2025 · Updated 9 months ago
Alternatives and similar repositories for Quamba
Users interested in Quamba are comparing it to the libraries listed below.
- xKV: Cross-Layer SVD for KV-Cache Compression ☆45 · Nov 30, 2025 · Updated 3 months ago
- ☆31 · May 29, 2025 · Updated 9 months ago
- ptq4vm official repository ☆27 · Apr 7, 2025 · Updated 11 months ago
- ☆18 · Jul 31, 2025 · Updated 7 months ago
- Code for "Theoretical Foundations of Deep Selective State-Space Models" (NeurIPS 2024) ☆15 · Jan 7, 2025 · Updated last year
- Official mirror of torchjpeg. Please do not open PRs here, they will be ignored. Go to the GitLab repository to contribute. ☆22 · Jun 21, 2023 · Updated 2 years ago
- ☆15 · May 30, 2024 · Updated last year
- Open-sourcing code associated with the AAAI-25 paper "On the Expressiveness and Length Generalization of Selective State-Space Models on … ☆16 · Sep 18, 2025 · Updated 6 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Jun 12, 2024 · Updated last year
- ☆20 · Dec 5, 2024 · Updated last year
- ☆77 · Feb 5, 2026 · Updated last month
- ☆28 · Nov 28, 2025 · Updated 3 months ago
- A lightweight Triton-based General Matrix Multiplication (GEMM) library ☆55 · Mar 19, 2026 · Updated last week
- ☆15 · Apr 11, 2024 · Updated last year
- ☆20 · Dec 3, 2025 · Updated 3 months ago
- FSA: Fusing FlashAttention within a Single Systolic Array ☆96 · Mar 2, 2026 · Updated 3 weeks ago
- On-the-fly Definition Augmentation of LLMs for Biomedical NER ☆14 · Apr 14, 2025 · Updated 11 months ago
- DeciMamba: Exploring the Length Extrapolation Potential of Mamba (ICLR 2025) ☆32 · Apr 9, 2025 · Updated 11 months ago
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ☆155 · Feb 20, 2025 · Updated last year
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" ☆17 · Mar 31, 2025 · Updated 11 months ago
- PyTorch emulation library for Microscaling (MX)-compatible data formats ☆348 · Jun 18, 2025 · Updated 9 months ago
- Official implementation of ECCV24 paper: POA ☆24 · Aug 8, 2024 · Updated last year
- PyTorch code for the paper "QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models" ☆25 · Sep 27, 2023 · Updated 2 years ago
- ☆68 · Jul 8, 2025 · Updated 8 months ago
- ☆35 · Mar 12, 2025 · Updated last year
- ☆63 · Jul 21, 2024 · Updated last year
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆154 · Mar 21, 2025 · Updated last year
- LLM Inference with Microscaling Format ☆34 · Nov 12, 2024 · Updated last year
- This is the implementation of Cross-attention inspired Mamba ☆40 · Apr 5, 2025 · Updated 11 months ago
- ☆55 · Nov 22, 2024 · Updated last year
- [ICLR 2025] Official PyTorch Implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆29 · Jul 24, 2025 · Updated 8 months ago
- DPO, but faster 🚀 ☆49 · Dec 6, 2024 · Updated last year
- Code for the paper "Four Over Six: More Accurate NVFP4 Quantization with Adaptive Block Scaling" ☆140 · Mar 7, 2026 · Updated 2 weeks ago
- ☆25 · Dec 11, 2021 · Updated 4 years ago
- ☆32 · Mar 31, 2025 · Updated 11 months ago
- Source code of the paper "A Stronger Mixture of Low-Rank Experts for Fine-Tuning Foundation Models" (ICML 2025) ☆37 · Apr 2, 2025 · Updated 11 months ago
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment ☆753 · Aug 6, 2025 · Updated 7 months ago
- All-in-one repository for fine-tuning and pretraining (large) language models ☆15 · Mar 8, 2023 · Updated 3 years ago
- AdaSkip: Adaptive Sublayer Skipping for Accelerating Long-Context LLM Inference ☆20 · Jan 24, 2025 · Updated last year