mlfoundations / open_lm
A repository for research on medium-sized language models.
☆510 · Updated 3 months ago
Alternatives and similar repositories for open_lm
Users interested in open_lm are comparing it to the libraries listed below.
- Scaling Data-Constrained Language Models ☆341 · Updated 2 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆256 · Updated last year
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆310 · Updated last year
- Website for hosting the Open Foundation Models Cheat Sheet. ☆268 · Updated 4 months ago
- ☆538 · Updated 9 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆199 · Updated last year
- Inference code for Persimmon-8B ☆415 · Updated 2 years ago
- batched loras ☆345 · Updated 2 years ago
- A bagel, with everything. ☆324 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆745 · Updated 11 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆271 · Updated last year
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in Pytorch ☆537 · Updated 3 months ago
- RuLES: a benchmark for evaluating rule-following in language models ☆231 · Updated 6 months ago
- ☆565 · Updated last year
- Official repository for ORPO ☆465 · Updated last year
- An open collection of methodologies to help with successful training of large language models. ☆511 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆559 · Updated 8 months ago
- distributed trainer for LLMs ☆580 · Updated last year
- Large Context Attention ☆736 · Updated 7 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆885 · Updated 2 months ago
- Fast bare-bones BPE for modern tokenizer training ☆164 · Updated 2 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆245 · Updated 10 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆711 · Updated last year
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ☆473 · Updated last year
- Evaluation suite for LLMs ☆359 · Updated 2 months ago
- A crude RLHF layer on top of nanoGPT with Gumbel-Softmax trick ☆293 · Updated last year
- DSIR large-scale data selection framework for language model training ☆258 · Updated last year
- PyTorch building blocks for the OLMo ecosystem ☆286 · Updated this week
- ☆195 · Updated last week
- Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates ☆463 · Updated last year