foundation-model-stack / bamba
Train, tune, and run inference with the Bamba model
☆127 · Updated last month
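For context, here is a minimal inference sketch using Hugging Face transformers. This is an assumption-laden example, not taken from the repo: the model id `ibm-ai-platform/Bamba-9B`, the prompt, and the generation settings are all placeholders to illustrate the usual causal-LM workflow.

```python
# Minimal sketch: text generation with a Bamba checkpoint via transformers.
# The model id below is an assumption; substitute the checkpoint you actually use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-ai-platform/Bamba-9B"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The Bamba architecture combines", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```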
Alternatives and similar repositories for bamba
Users who are interested in bamba are comparing it to the libraries listed below.
- Lightweight toolkit to train and fine-tune 1.58-bit language models ☆69 · Updated 2 weeks ago
- Matrix (Multi-Agent daTa geneRation Infra and eXperimentation framework) is a versatile engine for multi-agent conversational data genera… ☆60 · Updated this week
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆36 · Updated last year
- A repository for research on medium-sized language models. ☆76 · Updated last year
- ☆44 · Updated last year
- Maya: An Instruction Finetuned Multilingual Multimodal Model using Aya ☆111 · Updated 2 weeks ago
- ☆47 · Updated 9 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆98 · Updated last month
- ☆46 · Updated last week
- Data preparation code for Amber 7B LLM ☆90 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 9 months ago
- Optimizing causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆53 · Updated 4 months ago
- LM engine is a library for pretraining/fine-tuning LLMs ☆55 · Updated this week
- ☆55 · Updated 3 weeks ago
- Train your own SOTA deductive reasoning model ☆92 · Updated 2 months ago
- PyTorch implementation of models from the Zamba2 series. ☆181 · Updated 4 months ago
- Load compute kernels from the Hub ☆139 · Updated this week
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆126 · Updated 6 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆100 · Updated 2 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆59 · Updated 7 months ago
- ☆58 · Updated 2 weeks ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆67 · Updated 2 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆28 · Updated 8 months ago
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆105 · Updated this week
- Docker image for NVIDIA GH200 machines - optimized for vLLM serving and HF Trainer fine-tuning ☆42 · Updated 3 months ago
- ☆72 · Updated last month
- ☆30 · Updated last month
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆47 · Updated last month
- Source code for the collaborative reasoner research project at Meta FAIR. ☆87 · Updated last month
- QAlign is a new test-time alignment approach that improves language model performance by using Markov chain Monte Carlo methods. ☆24 · Updated last month