NVIDIA / NeMo-Aligner
Scalable toolkit for efficient model alignment
☆820 · Updated this week
Alternatives and similar repositories for NeMo-Aligner
Users interested in NeMo-Aligner are comparing it to the libraries listed below.
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆734 · Updated 9 months ago
- A project to improve skills of large language models ☆456 · Updated this week
- Distributed trainer for LLMs ☆578 · Updated last year
- Scalable toolkit for efficient model reinforcement ☆478 · Updated this week
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆618 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆2,012 · Updated this week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,259 · Updated 4 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆904 · Updated 4 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆865 · Updated 2 weeks ago
- Ring attention implementation with flash attention ☆800 · Updated last week
- OLMoE: Open Mixture-of-Experts Language Models ☆798 · Updated 3 months ago
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆728 · Updated 3 months ago
- Recipes to scale inference-time compute of open models ☆1,101 · Updated last month
- Large Context Attention ☆718 · Updated 5 months ago
- ☆824 · Updated last week
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆463 · Updated last year
- ☆523 · Updated 7 months ago
- Official repository for ORPO ☆457 · Updated last year
- RewardBench: the first evaluation tool for reward models. ☆609 · Updated last month
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆421 · Updated 8 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… ☆1,061 · Updated 2 weeks ago
- A repository for research on medium-sized language models. ☆502 · Updated last month
- SkyRL: A Modular Full-stack RL Library for LLMs ☆574 · Updated this week
- ☆946 · Updated 5 months ago
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation ☆305 · Updated 2 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,684 · Updated last week
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) ☆341 · Updated 9 months ago
- A family of compressed models obtained via pruning and knowledge distillation ☆343 · Updated 7 months ago
- A flexible and efficient training framework for large-scale alignment tasks ☆385 · Updated this week
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3. ☆1,371 · Updated last week