mlfoundations / open_lm
A repository for research on medium-sized language models.
☆531 · Updated 8 months ago
Alternatives and similar repositories for open_lm
Users interested in open_lm are comparing it to the libraries listed below.
- Scaling Data-Constrained Language Models ☆341 · Updated 7 months ago
- Website for hosting the Open Foundation Models Cheat Sheet. ☆269 · Updated 9 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆316 · Updated 2 years ago
- ☆564 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆204 · Updated last year
- Inference code for Persimmon-8B ☆412 · Updated 2 years ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆260 · Updated 2 years ago
- batched loras ☆349 · Updated 2 years ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆752 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆562 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆280 · Updated last year
- A bagel, with everything. ☆326 · Updated last year
- ☆593 · Updated last year
- Official repository for ORPO ☆469 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆737 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆905 · Updated 4 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆549 · Updated 8 months ago
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆390 · Updated last year
- distributed trainer for LLMs ☆588 · Updated last year
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs (see the sketch after this list). Conceptually, spars… ☆371 · Updated last year
- RuLES: a benchmark for evaluating rule-following in language models ☆248 · Updated 11 months ago
- Fast bare-bones BPE for modern tokenizer training ☆175 · Updated 7 months ago
- Scalable toolkit for efficient model alignment ☆852 · Updated 4 months ago
- Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates ☆473 · Updated last year
- DSIR large-scale data selection framework for language model training ☆269 · Updated last year
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆639 · Updated last year
- Large Context Attention ☆766 · Updated 3 months ago
- Reproducible, flexible LLM evaluations ☆337 · Updated 2 weeks ago
- ☆208 · Updated 3 weeks ago
- An open collection of methodologies to help with successful training of large language models. ☆554 · Updated last year
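
The memory-layers entry above describes adding parameters through a trainable key-value lookup without increasing FLOPs. As a rough illustration only, here is a minimal PyTorch sketch of that idea using naive dense scoring plus top-k selection; the referenced repository's actual design may differ (e.g. product-key factorization), and the names `MemoryLayer`, `num_slots`, and `top_k` are all hypothetical.

```python
# Minimal sketch of a trainable key-value memory layer (illustrative only;
# not the referenced repository's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    def __init__(self, d_model: int, num_slots: int, top_k: int = 4):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)    # trainable keys
        self.values = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)  # trainable values
        self.query_proj = nn.Linear(d_model, d_model)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        q = self.query_proj(x)                           # project hidden states to queries
        scores = q @ self.keys.T                         # similarity to every key slot
        top_scores, top_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(top_scores, dim=-1)          # normalize over the k selected slots
        gathered = self.values[top_idx]                  # (batch, seq, top_k, d_model)
        # Only top_k of num_slots values contribute per token, so parameter
        # count grows with num_slots while per-token value compute stays flat.
        return (weights.unsqueeze(-1) * gathered).sum(dim=-2)
```

Note that in this toy form the scoring step still touches every key; product-key memories factor the key set into two smaller sets so top-k retrieval stays cheap even with millions of slots.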