mlfoundations / open_lm
A repository for research on medium-sized language models.
☆492 · Updated 3 months ago
Alternatives and similar repositories for open_lm:
Users interested in open_lm are comparing it to the libraries listed below.
- Scaling Data-Constrained Language Models ☆335 · Updated 6 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆300 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆254 · Updated 9 months ago
- ☆509 · Updated 4 months ago
- Scalable toolkit for efficient model alignment ☆761 · Updated this week
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆510 · Updated 5 months ago
- Multipack distributed sampler for fast padding-free training of LLMs (see the packing sketch after this list) ☆187 · Updated 8 months ago
- A bagel, with everything. ☆318 · Updated last year
- ☆524 · Updated 7 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆255 · Updated last year
- Distributed trainer for LLMs ☆571 · Updated 10 months ago
- Website for hosting the Open Foundation Models Cheat Sheet. ☆267 · Updated this week
- Batched LoRAs ☆341 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆712 · Updated 6 months ago
- Minimalistic large language model 3D-parallelism training ☆1,771 · Updated this week
- Large Context Attention ☆700 · Updated 2 months ago
- ☆182 · Updated this week
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆600 · Updated last year
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆648 · Updated 10 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆457 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs); see the DPO sketch after this list. ☆829 · Updated 3 weeks ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including preference learning, reinforcement learning, etc. ☆313 · Updated last week
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆545 · Updated last year
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (see the memory-layer sketch after this list) ☆313 · Updated 4 months ago
- RuLES: a benchmark for evaluating rule-following in language models ☆220 · Updated last month
- Inference code for Persimmon-8B ☆415 · Updated last year
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆238 · Updated this week
- RewardBench: the first evaluation tool for reward models ☆547 · Updated last month
- ☆412 · Updated last year
- DSIR large-scale data selection framework for language model training ☆245 · Updated last year
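
The multipack sampler entry above refers to packing variable-length sequences into fixed token budgets so batches carry little or no padding. Below is a minimal first-fit packing sketch of that idea; the function name and the greedy longest-first ordering are illustrative assumptions, and the real sampler additionally balances packs across distributed ranks.

```python
# Illustrative first-fit packing in the spirit of the multipack sampler
# entry above; names and strategy are assumptions, not that repo's API.
def pack_sequences(lengths: list[int], max_tokens: int) -> list[list[int]]:
    """Group sequence indices into packs whose total length fits the
    token budget, so batches need little or no padding."""
    packs: list[list[int]] = []
    remaining: list[int] = []  # free space left in each open pack
    # Placing longer sequences first improves greedy first-fit packing.
    for idx in sorted(range(len(lengths)), key=lambda i: -lengths[i]):
        for p, space in enumerate(remaining):
            if lengths[idx] <= space:
                packs[p].append(idx)
                remaining[p] -= lengths[idx]
                break
        else:
            packs.append([idx])
            remaining.append(max_tokens - lengths[idx])
    return packs

print(pack_sequences([900, 300, 700, 100], max_tokens=1024))
# -> [[0, 3], [2, 1]]: two full-ish packs instead of four padded rows
```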
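
The HALOs entry above lists DPO among its implemented loss functions. The sketch below writes the DPO objective directly from the published formula (Rafailov et al., 2023) rather than from that library's API; the argument names are assumptions.

```python
# A hedged sketch of the DPO objective, not the HALOs library's code.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss over per-example sequence log-probabilities
    (token log-probs summed per completion)."""
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Push the policy's log-ratio margin between the chosen and the
    # rejected completion to be large and positive.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

loss = dpo_loss(torch.tensor([-10.0]), torch.tensor([-12.0]),
                torch.tensor([-11.0]), torch.tensor([-11.5]))
```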
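
The memory-layers entry above describes a trainable key-value lookup that adds parameters without adding FLOPs: each token reads only a few slots from a large memory table. A minimal sketch of that idea follows, with made-up module and parameter names; it is not the repository's actual implementation.

```python
# A minimal sketch of a sparse key-value memory layer, assuming
# illustrative shapes and names rather than the real repo's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMemoryLayer(nn.Module):
    """A large key-value table adds parameters, but only the top-k
    slots are read per token, so per-token FLOPs stay near constant."""
    def __init__(self, dim: int, num_slots: int = 4096, top_k: int = 4):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, dim) * dim**-0.5)
        self.values = nn.Parameter(torch.randn(num_slots, dim) * dim**-0.5)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim). Score every slot, keep the k best.
        scores = x @ self.keys.T                       # (b, s, num_slots)
        top_scores, top_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(top_scores, dim=-1)        # (b, s, k)
        gathered = self.values[top_idx]                # (b, s, k, dim)
        # Residual add of the sparsely retrieved memory values.
        return x + (weights.unsqueeze(-1) * gathered).sum(dim=-2)

layer = SparseMemoryLayer(dim=64)
out = layer(torch.randn(2, 8, 64))  # same shape as the input
```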