NVIDIA / NeMo-Aligner
Scalable toolkit for efficient model alignment
☆846 · Updated 2 months ago
Alternatives and similar repositories for NeMo-Aligner
Users interested in NeMo-Aligner are comparing it to the libraries listed below.
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. (☆751 · Updated last year)
- A project to improve the skills of large language models (☆665 · Updated last week)
- Distributed trainer for LLMs (☆584 · Updated last year)
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning (☆634 · Updated last year)
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding (☆1,307 · Updated 9 months ago)
- OLMoE: Open Mixture-of-Experts Language Models (☆930 · Updated 3 months ago)
- Large Context Attention (☆754 · Updated 2 months ago)
- Scalable toolkit for efficient model reinforcement (☆1,141 · Updated this week)
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs); see the DPO loss sketch after this list. (☆896 · Updated 2 months ago)
- ☆1,035 · Updated 5 months ago
- Ring attention implementation with flash attention (☆949 · Updated 3 months ago)
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … (☆798 · Updated 9 months ago)
- Implementation of the paper Data Engineering for Scaling Language Models to 128K Context (☆478 · Updated last year)
- Official repository for ORPO (☆468 · Updated last year)
- Minimalistic large language model 3D-parallelism training (☆2,365 · Updated last week)
- ☆558 · Updated last year
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward; see the SimPO loss sketch after this list. (☆934 · Updated 10 months ago)
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… (☆1,166 · Updated 2 months ago)
- Recipes to scale inference-time compute of open models (☆1,120 · Updated 7 months ago)
- RewardBench: the first evaluation tool for reward models. (☆670 · Updated 6 months ago)
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. (☆582 · Updated last month)
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" (☆446 · Updated last year)
- A repository for research on medium-sized language models. (☆524 · Updated 6 months ago)
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 (☆361 · Updated last year)
- HuggingFace conversion and training library for Megatron-based models (☆295 · Updated this week)
- PyTorch building blocks for the OLMo ecosystem (☆563 · Updated last week)
- Super-Efficient RLHF Training of LLMs with Parameter Reallocation (☆328 · Updated 7 months ago)
- SkyRL: A Modular Full-stack RL Library for LLMs (☆1,394 · Updated this week)
- ☆969 · Updated 10 months ago
- PyTorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets; see the domain-weight update sketch after this list. (☆350 · Updated last year)
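
Several of the repositories above (the HALOs library, the ORPO repository, SimPO) implement preference-optimization losses. As background, here is a minimal sketch of the standard DPO objective in PyTorch; the function name, argument names, and shapes are illustrative assumptions, not any listed library's API.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization loss (Rafailov et al., 2023).

    Inputs are per-example summed log-probabilities of the chosen and
    rejected responses under the policy and the frozen reference model,
    each of shape (batch,).
    """
    # Implicit reward = beta * log-ratio of policy to reference.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Binary logistic loss on the reward margin between chosen and rejected.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```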
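
SimPO, referenced above, removes the reference model entirely: the implicit reward is the length-normalized log-probability of the response, and a fixed target margin gamma is subtracted. A minimal sketch under the same assumptions; the names are illustrative, and the beta/gamma defaults below are typical values from the paper's reported ranges, not canonical settings.

```python
import torch
import torch.nn.functional as F

def simpo_loss(policy_chosen_logps: torch.Tensor,
               policy_rejected_logps: torch.Tensor,
               chosen_lengths: torch.Tensor,
               rejected_lengths: torch.Tensor,
               beta: float = 2.0,
               gamma: float = 0.5) -> torch.Tensor:
    """SimPO: reference-free preference loss (Meng et al., 2024).

    Log-probabilities are summed over response tokens, shape (batch,);
    lengths are response token counts used for length normalization.
    """
    # Average per-token log-prob acts as the implicit reward.
    chosen_rewards = beta * policy_chosen_logps / chosen_lengths
    rejected_rewards = beta * policy_rejected_logps / rejected_lengths
    # The chosen reward must beat the rejected one by at least gamma.
    return -F.logsigmoid(chosen_rewards - rejected_rewards - gamma).mean()
```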
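
The DoReMi entry above optimizes domain mixture weights by an exponentiated-gradient step on each domain's excess loss (proxy model loss minus reference model loss). A minimal NumPy sketch of one weight update follows; the names and the smoothing constant are assumptions for illustration, not values taken from the repository.

```python
import numpy as np

def doremi_weight_update(weights: np.ndarray,
                         proxy_losses: np.ndarray,
                         ref_losses: np.ndarray,
                         step_size: float = 1.0,
                         smoothing: float = 1e-3) -> np.ndarray:
    """One DoReMi-style update of per-domain mixture weights.

    `proxy_losses` and `ref_losses` are current per-domain mean losses
    of the small proxy model and the pretrained reference model.
    """
    # Excess loss: how much the proxy still lags the reference per domain.
    excess = np.maximum(proxy_losses - ref_losses, 0.0)
    # Exponentiated-gradient step: upweight high-excess-loss domains.
    logits = np.log(weights) + step_size * excess
    new_weights = np.exp(logits - logits.max())
    new_weights /= new_weights.sum()
    # Mix with the uniform distribution for smoothing, keeping the sum at 1.
    uniform = np.full_like(new_weights, 1.0 / len(new_weights))
    return (1.0 - smoothing) * new_weights + smoothing * uniform
```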