NVIDIA / NeMo-Skills
A project to improve the skills of large language models
☆413 · Updated this week
Alternatives and similar repositories for NeMo-Skills
Users who are interested in NeMo-Skills are comparing it to the libraries listed below.
- Scalable toolkit for efficient model alignment ☆803 · Updated 2 weeks ago
- Scalable toolkit for efficient model reinforcement ☆361 · Updated this week
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆367 · Updated last week
- Reproducible, flexible LLM evaluations ☆203 · Updated 3 weeks ago
- ☆731 · Updated last month
- PyTorch building blocks for the OLMo ecosystem ☆222 · Updated this week
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆461 · Updated last year
- SkyRL-v0: Train Real-World Long-Horizon Agents via Reinforcement Learning ☆343 · Updated last week
- RewardBench: the first evaluation tool for reward models ☆582 · Updated this week
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆410 · Updated 7 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆353 · Updated 8 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆333 · Updated 5 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆302 · Updated last year
- Tina: Tiny Reasoning Models via LoRA ☆245 · Updated this week
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware ☆725 · Updated 8 months ago
- ☆293 · Updated this week
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆301 · Updated 3 weeks ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) ☆326 · Updated 8 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs ☆417 · Updated last year
- ☆517 · Updated 6 months ago
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM) ☆248 · Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆249 · Updated this week
- Official repo for the paper "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆231 · Updated 3 weeks ago
- Official repository for ORPO ☆453 · Updated last year
- Automatic evals for LLMs ☆399 · Updated this week
- A family of compressed models obtained via pruning and knowledge distillation ☆341 · Updated 6 months ago
- ☆554 · Updated last month
- ☆188 · Updated 3 months ago
- LongRoPE is a novel method that can extend the context window of pre-trained LLMs to an impressive 2048k tokens ☆228 · Updated 9 months ago
- ☆282 · Updated 10 months ago