mlfoundations / open_lm
A repository for research on medium-sized language models.
☆484 · Updated this week
Alternatives and similar repositories for open_lm:
Users interested in open_lm are comparing it to the libraries listed below.
- Scaling Data-Constrained Language Models ☆330 · Updated 3 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆297 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆785 · Updated 2 weeks ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆687 · Updated 3 months ago
- RewardBench: the first evaluation tool for reward models. ☆491 · Updated last week
- Multipack distributed sampler for fast padding-free training of LLMs ☆184 · Updated 5 months ago
- Website for hosting the Open Foundation Models Cheat Sheet. ☆262 · Updated 6 months ago
- Large Context Attention ☆670 · Updated 5 months ago
- Scalable toolkit for efficient model alignment ☆674 · Updated this week
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆492 · Updated 2 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆253 · Updated last year
- Batched LoRAs ☆336 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆247 · Updated 6 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆578 · Updated 10 months ago
- Inference code for Persimmon-8B ☆416 · Updated last year
- A bagel, with everything. ☆315 · Updated 9 months ago
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling" ☆831 · Updated last month
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆382 · Updated 9 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆447 · Updated 9 months ago
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆377 · Updated 6 months ago
- Official repository for ORPO ☆430 · Updated 7 months ago
- Minimalistic large language model 3D-parallelism training ☆1,386 · Updated this week
- Distributed trainer for LLMs ☆555 · Updated 7 months ago
- Code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks ☆538 · Updated 10 months ago
- DSIR: a large-scale data selection framework for language model training ☆242 · Updated 9 months ago
- A bibliography and survey of the papers surrounding o1 ☆1,042 · Updated 2 months ago