NVIDIA / NeMo-Skills
A project to improve skills of large language models
☆456 · Updated this week
Alternatives and similar repositories for NeMo-Skills
Users interested in NeMo-Skills are comparing it to the libraries listed below.
- Scalable toolkit for efficient model alignment (☆820, updated this week)
- Reproducible, flexible LLM evaluations (☆215, updated 2 months ago)
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. (☆404, updated this week)
- PyTorch building blocks for the OLMo ecosystem (☆258, updated this week)
- [ICML'24] Data and code for the paper "Training-Free Long-Context Scaling of Large Language Models" (☆421, updated 8 months ago)
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" (☆306, updated last year)
- Automatic evals for LLMs (☆461, updated 2 weeks ago)
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" (☆463, updated last year)
- ☆824, updated last week
- Scalable toolkit for efficient model reinforcement (☆478, updated this week)
- Official repository for ORPO (☆457, updated last year)
- SkyRL: A Modular Full-stack RL Library for LLMs (☆574, updated this week)
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware (☆735, updated 9 months ago)
- Tina: Tiny Reasoning Models via LoRA (☆266, updated last month)
- ☆303, updated last month
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM) (☆258, updated 3 weeks ago)
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (☆341, updated 7 months ago)
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning (☆357, updated 10 months ago)
- RewardBench: the first evaluation tool for reward models (☆609, updated last month)
- ☆585, updated 2 months ago
- ☆523, updated 7 months ago
- The official evaluation suite and dynamic data release for MixEval (☆242, updated 8 months ago)
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling (☆410, updated last month)
- OLMoE: Open Mixture-of-Experts Language Models (☆798, updated 3 months ago)
- A family of compressed models obtained via pruning and knowledge distillation (☆343, updated 7 months ago)
- Official repo for the paper "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" (☆244, updated 2 months ago)
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 (☆314, updated 2 months ago)
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens" (https://arxiv.org/abs/2402.13718) (☆341, updated 9 months ago)
- ☆449, updated 11 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs (☆428, updated last year)