mlfoundations / open_lm
A repository for research on medium-sized language models.
☆506 · Updated last month
Alternatives and similar repositories for open_lm
Users interested in open_lm are comparing it to the repositories listed below.
- Scaling Data-Constrained Language Models ☆338 · Updated last month
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆306 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆199 · Updated 11 months ago
- Website for hosting the Open Foundation Models Cheat Sheet ☆267 · Updated 2 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆256 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆268 · Updated last year
- ☆529 · Updated 8 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆532 · Updated 2 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware ☆739 · Updated 10 months ago
- ☆556 · Updated 11 months ago
- Large Context Attention ☆719 · Updated 6 months ago
- Official repository for ORPO ☆461 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆555 · Updated 7 months ago
- Inference code for Persimmon-8B ☆415 · Updated last year
- Batched LoRAs ☆344 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs) ☆873 · Updated 3 weeks ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆467 · Updated last year
- PyTorch building blocks for the OLMo ecosystem ☆269 · Updated this week
- The official evaluation suite and dynamic data release for MixEval ☆242 · Updated 8 months ago
- A bagel, with everything ☆323 · Updated last year
- BABILong: a benchmark for LLM evaluation using the needle-in-a-haystack approach ☆208 · Updated 2 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs ☆428 · Updated last year
- Fast bare-bones BPE for modern tokenizer training ☆160 · Updated last month
- Official code for ReLoRA, from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆458 · Updated last year
- Scalable toolkit for efficient model alignment ☆833 · Updated this week
- Distributed trainer for LLMs ☆578 · Updated last year
- Code for fine-tuning Platypus family LLMs using LoRA ☆628 · Updated last year
- ☆206 · Updated 5 months ago
- Annotated version of the Mamba paper ☆487 · Updated last year
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆626 · Updated last year