mlfoundations / open_lm
A repository for research on medium-sized language models.
☆498 · Updated 2 weeks ago
Alternatives and similar repositories for open_lm
Users interested in open_lm are comparing it to the libraries listed below.
- Scaling Data-Constrained Language Models ☆335 · Updated 9 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆257 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆191 · Updated 10 months ago
- Large Context Attention ☆716 · Updated 4 months ago
- Website for hosting the Open Foundation Models Cheat Sheet. ☆267 · Updated last month
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆519 · Updated last month
- A bagel, with everything. ☆321 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆731 · Updated 8 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆303 · Updated last year
- ☆520 · Updated 7 months ago
- Inference code for Persimmon-8B ☆415 · Updated last year
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆617 · Updated last year
- ☆541 · Updated 9 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆260 · Updated 11 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,258 · Updated 3 months ago
- Distributed trainer for LLMs ☆577 · Updated last year
- Official repository for ORPO ☆455 · Updated last year
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆463 · Updated last year
- Scalable toolkit for efficient model alignment ☆814 · Updated 3 weeks ago
- ☆415 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆857 · Updated 2 weeks ago
- Official PyTorch implementation of QA-LoRA ☆137 · Updated last year
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models". ☆277 · Updated last year
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs; a minimal illustrative sketch follows this list. ☆337 · Updated 6 months ago
- Batched LoRAs ☆343 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆1,926 · Updated last week
- [ICML 2024] CLLMs: Consistency Large Language Models ☆394 · Updated 7 months ago
- DataComp: In search of the next generation of multimodal datasets ☆717 · Updated last month
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆640 · Updated 11 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆881 · Updated last month
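
The memory-layers entry above describes adding capacity through a sparse, trainable key-value lookup. As a rough illustration only (this is not code from that repository; `ToyMemoryLayer`, `num_keys`, and `k` are names invented for this sketch), here is a minimal PyTorch version of the idea: a large learned value table is consulted sparsely, so enlarging it adds parameters without adding per-token compute on the value side. Note that this naive sketch still scores every key; real memory layers use tricks such as product keys so that key scoring also stays cheap.

```python
import torch
import torch.nn as nn

class ToyMemoryLayer(nn.Module):
    """Toy sparse key-value memory (illustrative, not the repo's implementation).

    Each token scores all `num_keys` learned keys, keeps the top k, and
    gathers only those k rows from a large value table. Growing `num_keys`
    grows the parameter count, but each token still touches only k value rows.
    """

    def __init__(self, d_model: int, num_keys: int = 4096, k: int = 4):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_keys, d_model) / d_model**0.5)
        self.values = nn.Embedding(num_keys, d_model)  # the big, sparsely used table
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        scores = x @ self.keys.t()                     # (batch, seq, num_keys)
        top_scores, top_idx = scores.topk(self.k, -1)  # keep the k best keys per token
        weights = top_scores.softmax(dim=-1)           # (batch, seq, k)
        retrieved = self.values(top_idx)               # (batch, seq, k, d_model), sparse gather
        return x + (weights.unsqueeze(-1) * retrieved).sum(dim=-2)

layer = ToyMemoryLayer(d_model=64)
out = layer(torch.randn(2, 10, 64))
print(out.shape)  # torch.Size([2, 10, 64])
```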