PyTorch implementation of models from the Zamba2 series.
☆193 · Updated Jan 23, 2025
Alternatives and similar repositories for Zamba2
Users interested in Zamba2 are comparing it to the libraries listed below.
- ☆52 · Updated Feb 5, 2025
- ☆59 · Updated Jul 9, 2024
- Here we will test various linear attention designs. (☆62 · Updated Apr 25, 2024)
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling (☆958 · Updated Nov 16, 2025)
- Transformers components but in Triton (☆34 · Updated May 9, 2025)
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated (☆35 · Updated Aug 14, 2024)
- ☆11 · Updated Oct 11, 2023
- Official PyTorch Implementation of the Longhorn Deep State Space Model (☆57 · Updated Dec 4, 2024)
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" (☆171 · Updated Jan 30, 2025)
- DeciMamba: Exploring the Length Extrapolation Potential of Mamba (ICLR 2025) (☆32 · Updated Apr 9, 2025)
- ☆13 · Updated Dec 15, 2025
- ☆52 · Updated Jan 28, 2024
- GoldFinch and other hybrid transformer components (☆46 · Updated Jul 20, 2024)
- Code repository for Black Mamba (☆263 · Updated Feb 8, 2024)
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… (☆68 · Updated Apr 24, 2024)
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule (☆558 · Updated Mar 13, 2026)
- A Triton Kernel for incorporating Bi-Directionality in Mamba2 (☆82 · Updated Dec 18, 2024)
- RWKV-X is a Linear Complexity Hybrid Language Model based on the RWKV architecture, integrating Sparse Attention to improve the model's l… (☆57 · Updated Mar 31, 2026)
- A repository for research on medium sized language models. (☆78 · Updated May 23, 2024)
- Engineering the state of RNN language models (Mamba, RWKV, etc.) (☆32 · Updated May 25, 2024)
- Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… (☆123 · Updated Sep 13, 2024)
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. (☆205 · Updated Jul 17, 2024)
- Implementation of Infini-Transformer in Pytorch (☆112 · Updated Jan 4, 2025)
- Training hybrid models for dummies. (☆29 · Updated Nov 1, 2025)
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… (☆156 · Updated Apr 7, 2025)
- Train, tune, and infer Bamba model (☆137 · Updated Jun 4, 2025)
- Crawl & visualize ICLR papers and reviews. (☆18 · Updated Nov 5, 2022)
- Official repository of paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" (☆27 · Updated Apr 17, 2024)
- ☆18 · Updated Mar 10, 2023
- ☆54 · Updated Nov 22, 2024
- Parallel Associative Scan for Language Models (☆18 · Updated Jan 8, 2024)
- This library supports evaluating disparities in generated image quality, diversity, and consistency between geographic regions. (☆20 · Updated Jun 3, 2024)
- [ICLR 2025 & COLM 2025] Official PyTorch implementation of the Forgetting Transformer and Adaptive Computation Pruning