foundation-model-stack / bamba
Train, tune, and infer Bamba model
☆71Updated 3 weeks ago
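Bamba is a hybrid Mamba-2/attention language model. As a minimal sketch of what "infer Bamba model" can look like through Hugging Face `transformers` (the checkpoint name below and Bamba support in your installed `transformers` version are assumptions, not taken from this listing):

```python
# Minimal inference sketch. Assumes a recent `transformers` release with Bamba support,
# `accelerate` installed for device_map="auto", and the checkpoint name below (hypothetical here).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-ai-platform/Bamba-9B"  # assumed checkpoint name; swap in the one the repo ships
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit a single modern GPU
    device_map="auto",
)

# Generate a short continuation from a prompt.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```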
Alternatives and similar repositories for bamba:
Users interested in bamba are comparing it to the libraries listed below.
- A repository for research on medium-sized language models.☆76Updated 7 months ago
- Repo hosting code and materials related to speeding up LLM inference using token merging.☆33Updated 8 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment☆52Updated 4 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore☆22Updated 3 months ago
- Collection of autoregressive model implementations☆76Updated this week
- Triton Implementation of HyperAttention Algorithm☆46Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models"☆57Updated 2 months ago
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM☆50Updated 9 months ago
- My fork of Allen AI's OLMo for educational purposes.☆30Updated last month
- Using FlexAttention to compute attention with different masking patterns☆40Updated 3 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,…☆42Updated 5 months ago
- DPO, but faster 🚀☆29Updated last month
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention"☆96Updated 3 months ago
- My Implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated☆31Updated 4 months ago
- GoldFinch and other hybrid transformer components☆42Updated 5 months ago
- Repository for Sparse Finetuning of LLMs via modified version of the MosaicML llmfoundry☆40Updated 11 months ago
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation☆36Updated 2 months ago
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given…☆14Updated last year
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya☆98Updated this week