allenai / OLMo
Modeling, training, eval, and inference code for OLMo
☆6,197 · Updated last week
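Since the repo covers inference as well as training, here is a minimal sketch of loading an OLMo checkpoint through Hugging Face transformers. The checkpoint id `allenai/OLMo-2-1124-7B` is an assumption for illustration, not something taken from this listing; substitute any published OLMo checkpoint.

```python
# Minimal sketch: OLMo inference via Hugging Face transformers.
# The checkpoint id below is an assumption for illustration;
# any published OLMo checkpoint on the Hub should work the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "allenai/OLMo-2-1124-7B"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tokenize a prompt, generate a continuation, and decode it back to text.
inputs = tokenizer("Language modeling is ", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```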
Alternatives and similar repositories for OLMo
Users interested in OLMo are comparing it to the libraries listed below.
- ☆4,110 · Updated last year
- PyTorch native post-training library ☆5,604 · Updated last week
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆6,162 · Updated 3 months ago
- A framework for few-shot evaluation of language models. ☆10,776 · Updated last week
- Tools for merging pretrained large language models. ☆6,494 · Updated last week
- AllenAI's post-training codebase ☆3,373 · Updated this week
- DataComp for Language Models ☆1,394 · Updated 2 months ago
- A PyTorch native platform for training generative AI models ☆4,778 · Updated this week
- Robust recipes to align language models with human and AI preferences ☆5,431 · Updated 2 months ago
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. ☆10,183 · Updated last year
- The official PyTorch implementation of Google's Gemma models ☆5,578 · Updated 6 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,629 · Updated last year
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,362 · Updated 4 months ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,816 · Updated last year
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,749 · Updated last week
- A Next-Generation Training Engine Built for Ultra-Large MoE Models ☆5,008 · Updated this week
- Training LLMs with QLoRA + FSDP ☆1,534 · Updated last year
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,667 · Updated last year
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability… ☆3,768 · Updated this week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,323 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆2,351 · Updated last week
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,636 · Updated last year
- ☆2,552 · Updated last year
- Data and tools for generating and inspecting OLMo pre-training data. ☆1,349 · Updated 3 weeks ago
- Accessible large language models via k-bit quantization for PyTorch. ☆7,790 · Updated last week
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,992 · Updated 7 months ago
- Train transformer language models with reinforcement learning. ☆16,473 · Updated this week
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,135 · Updated last year
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆2,276 · Updated 6 months ago
- Run Mixtral-8x7B models in Colab or consumer desktops ☆2,324 · Updated last year