allenai / OLMo
Modeling, training, eval, and inference code for OLMo
☆6,245 · Updated last month
Alternatives and similar repositories for OLMo
Users who are interested in OLMo are comparing it to the libraries listed below.
- PyTorch native post-training library ☆5,619 · Updated last week
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆6,169 · Updated 4 months ago
- AllenAI's post-training codebase ☆3,456 · Updated this week
- ☆4,109 · Updated last year
- A framework for few-shot evaluation of language models. ☆10,976 · Updated this week
- Tools for merging pretrained large language models. ☆6,611 · Updated last week
- A PyTorch native platform for training generative AI models ☆4,866 · Updated this week
- Robust recipes to align language models with human and AI preferences ☆5,453 · Updated 3 months ago
- A Next-Generation Training Engine Built for Ultra-Large MoE Models ☆5,027 · Updated this week
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,780 · Updated this week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,407 · Updated this week
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,634 · Updated last year
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance. ☆3,804 · Updated this week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,840 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆2,381 · Updated last week
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,160 · Updated last year
- ☆2,552 · Updated last year
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,672 · Updated last year
- Data and tools for generating and inspecting OLMo pre-training data. ☆1,363 · Updated last month
- The official PyTorch implementation of Google's Gemma models ☆5,588 · Updated 6 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,642 · Updated last year
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆5,007 · Updated 8 months ago
- Go ahead and axolotl questions ☆10,974 · Updated this week
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verified research papers. ☆2,995 · Updated this week
- Training LLMs with QLoRA + FSDP ☆1,534 · Updated last year
- Train transformer language models with reinforcement learning. ☆16,722 · Updated this week
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,795 · Updated last year
- ☆4,233 · Updated 4 months ago
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆13,034 · Updated this week
- Run Mixtral-8x7B models in Colab or on consumer desktops ☆2,327 · Updated last year