mlfoundations / open_lm
A repository for research on medium-sized language models.
☆528 · Updated 7 months ago
Alternatives and similar repositories for open_lm
Users interested in open_lm are comparing it to the libraries listed below.
- Scaling Data-Constrained Language Models ☆342 · Updated 6 months ago
- Multipack distributed sampler for fast padding-free training of LLMs (the packing idea is sketched after this list) ☆203 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆260 · Updated 2 years ago
- Website for hosting the Open Foundation Models Cheat Sheet. ☆269 · Updated 8 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆315 · Updated 2 years ago
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆562 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆750 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆279 · Updated last year
- Inference code for Persimmon-8B ☆412 · Updated 2 years ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch (see the streaming-softmax sketch after this list) ☆549 · Updated 8 months ago
- Distributed trainer for LLMs ☆588 · Updated last year
- Large Context Attention ☆762 · Updated 3 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs); a minimal DPO loss is sketched after this list. ☆899 · Updated 3 months ago
- Official repository for ORPO ☆469 · Updated last year
- DSIR large-scale data selection framework for language model training ☆268 · Updated last year
- Scalable toolkit for efficient model alignment ☆847 · Updated 3 months ago
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates"; the merge-and-restart step is sketched after this list ☆474 · Updated last year
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction (the rank-reduction step is sketched after this list) ☆390 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆737 · Updated last year
- An open collection of methodologies to help with successful training of large language models. ☆550 · Updated last year
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆482 · Updated last year
- A bagel, with everything. ☆326 · Updated last year
- Official PyTorch implementation of QA-LoRA ☆145 · Updated last year
- Batched LoRAs ☆349 · Updated 2 years ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆370 · Updated last year
- Evaluation suite for LLMs ☆378 · Updated 6 months ago
- Annotated version of the Mamba paper ☆495 · Updated last year
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆454 · Updated last year
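
The multipack sampler above is easiest to judge with the core trick in view: treat batch construction as bin packing, so each batch holds several whole sequences and carries no padding tokens. Below is a minimal first-fit-decreasing sketch of that idea; `pack_sequences` is a hypothetical name, and the real sampler additionally balances the packed bins across distributed ranks.

```python
# Minimal sketch of padding-free batch packing via first-fit decreasing.
# Illustrative only: the actual multipack sampler also balances the
# resulting bins across distributed data-parallel ranks.

def pack_sequences(lengths, max_tokens):
    """Greedily pack sequence lengths into bins holding at most max_tokens."""
    order = sorted(range(len(lengths)), key=lambda i: lengths[i], reverse=True)
    bins, loads = [], []  # bins[b] holds sequence indices; loads[b] their total
    for idx in order:
        for b, load in enumerate(loads):
            if load + lengths[idx] <= max_tokens:  # first bin with room
                bins[b].append(idx)
                loads[b] += lengths[idx]
                break
        else:  # no existing bin fits, so open a new one
            bins.append([idx])
            loads.append(lengths[idx])
    return bins

lengths = [812, 97, 2048, 331, 1500, 64, 700]
for b in pack_sequences(lengths, max_tokens=2048):
    print(b, sum(lengths[i] for i in b))
```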
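Ring Attention shards the sequence so each device keeps one query block while key/value blocks travel around a ring of devices, and a streaming (online) softmax keeps the result exact despite the blockwise computation. The sketch below simulates the ring in a single process with plain PyTorch, non-causal for brevity; real implementations shard the blocks across devices and overlap the KV transfer with compute.

```python
# Single-process sketch of (non-causal) ring attention: K/V blocks "rotate"
# past each Q block while a streaming softmax keeps the result exact.
import torch

def ring_attention(q, k, v, n_blocks=4):
    scale = q.shape[-1] ** -0.5
    q_blocks = q.chunk(n_blocks, dim=0)
    kv_blocks = list(zip(k.chunk(n_blocks, dim=0), v.chunk(n_blocks, dim=0)))
    outs = []
    for qb in q_blocks:                                # one Q block per "device"
        m = torch.full((qb.shape[0],), float("-inf"))  # running row max
        l = torch.zeros(qb.shape[0])                   # running softmax denominator
        acc = torch.zeros_like(qb)                     # running numerator
        for kb, vb in kv_blocks:                       # KV blocks arriving on the ring
            s = qb @ kb.T * scale
            m_new = torch.maximum(m, s.max(dim=-1).values)
            p = torch.exp(s - m_new[:, None])
            corr = torch.exp(m - m_new)                # rescale earlier partial sums
            l = l * corr + p.sum(dim=-1)
            acc = acc * corr[:, None] + p @ vb
            m = m_new
        outs.append(acc / l[:, None])
    return torch.cat(outs, dim=0)

q, k, v = (torch.randn(64, 32) for _ in range(3))
ref = torch.softmax(q @ k.T * 32 ** -0.5, dim=-1) @ v
print(torch.allclose(ring_attention(q, k, v), ref, atol=1e-5))  # True
```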
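For the human-aware loss library, the simplest member of the family is DPO, whose loss (Rafailov et al., 2023) needs only per-sequence log-probabilities under the policy and a frozen reference model. A minimal sketch, assuming the summed per-token log-probs have already been computed:

```python
# Minimal sketch of the DPO loss, the simplest of the human-aware losses
# (HALOs) the library above generalizes. Inputs are summed per-sequence
# log-probs of the chosen/rejected responses under policy and reference.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # Implicit reward of each response: beta * log(pi_theta / pi_ref)
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # Maximize the margin between chosen and rejected rewards.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

# Toy usage with made-up log-probs for a batch of 3 preference pairs.
pol_c, pol_r = torch.tensor([-12.0, -8.5, -20.0]), torch.tensor([-14.0, -9.0, -18.0])
ref_c, ref_r = torch.tensor([-13.0, -8.7, -19.5]), torch.tensor([-13.5, -8.8, -18.2])
print(dpo_loss(pol_c, pol_r, ref_c, ref_r))
```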
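ReLoRA's distinguishing move is the restart: periodically fold the current low-rank adapter into the frozen base weight, then re-initialize the adapter, so a sum of low-rank updates can reach high rank over training. A minimal sketch of one such layer follows; the class and method names are hypothetical, and the paper additionally resets optimizer state and re-warms the learning rate at each restart.

```python
# Sketch of ReLoRA's merge-and-restart step: fold the low-rank update into
# the frozen base weight, then re-initialize the LoRA factors so the next
# cycle learns a fresh low-rank direction. Names here are hypothetical.
import math
import torch
import torch.nn as nn

class ReLoRALinear(nn.Module):
    def __init__(self, d_in, d_out, rank=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)        # base stays frozen
        self.lora_a = nn.Parameter(torch.empty(rank, d_in))
        self.lora_b = nn.Parameter(torch.zeros(d_out, rank))
        self.scale = alpha / rank
        nn.init.kaiming_uniform_(self.lora_a, a=math.sqrt(5))

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a.T) @ self.lora_b.T * self.scale

    @torch.no_grad()
    def merge_and_reinit(self):
        self.base.weight += self.scale * self.lora_b @ self.lora_a  # fold in
        nn.init.kaiming_uniform_(self.lora_a, a=math.sqrt(5))       # fresh A
        self.lora_b.zero_()                                         # B back to 0

layer = ReLoRALinear(64, 64)
y = layer(torch.randn(4, 64))
layer.merge_and_reinit()  # called every N optimizer steps during training
```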
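Finally, the layer-selective rank reduction paper's core operation is replacing one chosen weight matrix with its truncated-SVD approximation; the reported reasoning gains come from searching over which matrix to reduce and how aggressively. A sketch of just that operation, with a hypothetical `rank_reduce` helper:

```python
# Sketch of LASER-style rank reduction: replace one selected weight matrix
# with its best rank-k approximation via truncated SVD. The paper searches
# over layers/matrices; this shows only the core operation.
import torch

@torch.no_grad()
def rank_reduce(weight: torch.Tensor, keep_frac: float) -> torch.Tensor:
    u, s, vh = torch.linalg.svd(weight, full_matrices=False)
    k = max(1, int(keep_frac * s.numel()))      # number of singular values kept
    return u[:, :k] @ torch.diag(s[:k]) @ vh[:k, :]

w = torch.randn(512, 512)
w_low = rank_reduce(w, keep_frac=0.05)          # keep top 5% of singular values
print(torch.linalg.matrix_rank(w_low).item())   # ~25
```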