allenai / dolma
Data and tools for generating and inspecting OLMo pre-training data.
☆1,340 · Updated last week
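The toolkit's corpus layout is plain JSONL: each shard is a (typically gzipped) file in which every line is one JSON document. Below is a minimal sketch of peeking at such a shard from Python; the `id`, `text`, and `source` fields follow dolma's documented document schema, while the file path and the five-document cutoff are placeholders for illustration.

```python
import gzip
import json

# Hypothetical path to a dolma-format shard (one JSON document per line).
SHARD = "documents/0000.json.gz"

with gzip.open(SHARD, "rt", encoding="utf-8") as f:
    for i, line in enumerate(f):
        doc = json.loads(line)
        # id/text/source are the core fields of dolma's document schema;
        # shards may also carry extra fields such as metadata.
        print(doc["id"], doc["source"], len(doc["text"]))
        if i >= 4:  # inspect only the first five documents
            break
```

Tagger output ("attributes") is stored in parallel files rather than inline with the documents, which is why inspecting raw shards stays this simple.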
Alternatives and similar repositories for dolma
Users interested in dolma are comparing it to the libraries listed below.
- ☆552 · Updated 11 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,078 · Updated last week
- DataComp for Language Models ☆1,385 · Updated 2 months ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,927 · Updated this week
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,715 · Updated this week
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,634 · Updated last year
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,627 · Updated last year
- Train Models Contrastively in Pytorch ☆753 · Updated 7 months ago
- Minimalistic large language model 3D-parallelism training ☆2,311 · Updated 2 months ago
- AllenAI's post-training codebase ☆3,284 · Updated this week
- Evaluation suite for LLMs ☆365 · Updated 4 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,896 · Updated 3 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆894 · Updated last month
- Generative Representational Instruction Tuning ☆677 · Updated 4 months ago
- Scalable toolkit for efficient model alignment ☆844 · Updated last month
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆750 · Updated last year
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,526 · Updated 9 months ago
- Measuring Massive Multitask Language Understanding | ICLR 2021 ☆1,514 · Updated 2 years ago
- distributed trainer for LLMs ☆583 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,399 · Updated last year
- ☆1,035 · Updated 10 months ago
- Code for fine-tuning Platypus fam LLMs using LoRA ☆629 · Updated last year
- [ACL2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆970 · Updated last year
- Code for Quiet-STaR ☆741 · Updated last year
- Official repository for ORPO ☆463 · Updated last year
- DataDreamer: Prompt. Generate Synthetic Data. Train & Align Models. 🤖💤 ☆1,073 · Updated 9 months ago
- OLMoE: Open Mixture-of-Experts Language Models ☆899 · Updated last month
- Best practices for distilling large language models. ☆583 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆729 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,117 · Updated 5 months ago