mlfoundations / open_lm
A repository for research on medium-sized language models.
☆520 · Updated 5 months ago
Alternatives and similar repositories for open_lm
Users interested in open_lm are comparing it to the libraries listed below:
- Website for hosting the Open Foundation Models Cheat Sheet. ☆269 · Updated 6 months ago
- Scaling Data-Constrained Language Models ☆342 · Updated 5 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆315 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆258 · Updated 2 years ago
- Multipack distributed sampler for fast padding-free training of LLMs (see the packing sketch after this list) ☆202 · Updated last year
- ☆556 · Updated last year
- Inference code for Persimmon-8B ☆412 · Updated 2 years ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆277 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆750 · Updated last year
- ☆581 · Updated last year
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆545 · Updated 6 months ago
- Large Context Attention ☆753 · Updated last month
- Batched LoRAs ☆347 · Updated 2 years ago
- Distributed trainer for LLMs ☆584 · Updated last year
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆561 · Updated 11 months ago
- A bagel, with everything. ☆325 · Updated last year
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆389 · Updated last year
- Official PyTorch implementation of QA-LoRA ☆145 · Updated last year
- Official repository for ORPO ☆467 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆732 · Updated last year
- Scalable toolkit for efficient model alignment ☆847 · Updated last month
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs); a minimal DPO sketch appears after this list. ☆894 · Updated 2 months ago
- An open collection of methodologies to help with successful training of large language models. ☆540 · Updated last year
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆478 · Updated last year
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆469 · Updated last year
- RuLES: a benchmark for evaluating rule-following in language models ☆240 · Updated 9 months ago
- DSIR: a large-scale data selection framework for language model training ☆266 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆253 · Updated last year
- ☆415 · Updated 2 years ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆632 · Updated last year
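
The multipack sampler entry above refers to packing variable-length sequences into a fixed token budget so that batches carry no padding. As a rough orientation only, here is a minimal sketch of the general idea using first-fit-decreasing bin packing; the function name and structure are made up for this illustration and do not reflect the linked repo's actual distributed sampler.

```python
def pack_sequences(lengths, budget):
    """Greedy first-fit-decreasing packing of sequence lengths into bins.

    lengths: list of token counts, one per sample index.
    budget:  maximum tokens per packed batch (e.g. the context length).
    Returns a list of bins, each a list of sample indices whose total
    length fits within `budget`. Generic sketch, not the repo's sampler.
    """
    # Place longer sequences first; they are the hardest to fit.
    order = sorted(range(len(lengths)), key=lambda i: lengths[i], reverse=True)
    bins, loads = [], []
    for i in order:
        # Put the sequence into the first bin with enough headroom.
        for b, load in enumerate(loads):
            if load + lengths[i] <= budget:
                bins[b].append(i)
                loads[b] += lengths[i]
                break
        else:
            # No bin fits: open a new one.
            bins.append([i])
            loads.append(lengths[i])
    return bins

# Toy usage: pack five sequences into 2048-token batches.
print(pack_sequences([1500, 900, 600, 400, 100], budget=2048))
# -> [[0, 3, 4], [1, 2]]
```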
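
The HALOs entry above collects preference-optimization losses such as DPO. For orientation, here is a minimal, self-contained sketch of the standard DPO loss (Rafailov et al., 2023) in PyTorch; it is a generic illustration, not the HALOs library's API, and the function and argument names are invented for this example.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss on a batch of preference pairs.

    Each argument is a 1-D tensor of summed log-probabilities of the
    chosen/rejected responses under the trainable policy or the frozen
    reference model. `beta` scales the implicit KL penalty. All names
    here are illustrative, not taken from any of the repos above.
    """
    # Implicit reward margins: how much more the policy prefers each
    # response than the reference model does.
    chosen_rewards = policy_chosen_logps - ref_chosen_logps
    rejected_rewards = policy_rejected_logps - ref_rejected_logps
    # Push the chosen-vs-rejected margin up through a sigmoid loss.
    logits = beta * (chosen_rewards - rejected_rewards)
    return -F.logsigmoid(logits).mean()

# Toy usage with random log-probs for a batch of 4 preference pairs.
batch = [torch.randn(4) for _ in range(4)]
print(dpo_loss(*batch))
```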