NVlabs / hymba
☆167 · Updated 2 months ago
Alternatives and similar repositories for hymba:
Users interested in hymba are comparing it to the repositories listed below.
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆272 · Updated last week
- PyTorch implementation of models from the Zamba2 series. ☆176 · Updated 3 weeks ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆297 · Updated 2 months ago
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆196 · Updated 3 weeks ago
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆215 · Updated 3 weeks ago
- Efficient LLM Inference over Long Sequences ☆357 · Updated this week
- A family of compressed models obtained via pruning and knowledge distillation ☆321 · Updated 3 months ago
- Normalized Transformer (nGPT) ☆152 · Updated 3 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆149 · Updated 2 months ago
- ☆181 · Updated this week
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. ☆191 · Updated 7 months ago
- PyTorch Implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ☆157 · Updated 3 weeks ago
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆160 · Updated 2 months ago
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM). ☆195 · Updated last week
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆271 · Updated 3 months ago
- ☆350 · Updated 3 weeks ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in Pytorch ☆156 · Updated last month
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆145 · Updated 8 months ago
- ☆192 · Updated 2 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆221 · Updated this week
- ☆135 · Updated last week
- LLM KV cache compression made easy ☆397 · Updated this week
- Some preliminary explorations of Mamba's context scaling. ☆213 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆167 · Updated last month
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆217 · Updated 9 months ago
- ☆253 · Updated 5 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆95 · Updated 3 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including preference learning, reinforcement learning, etc. ☆194 · Updated last week
- A project to improve skills of large language models ☆248 · Updated this week