NVIDIA / NeMo-Skills
A project to improve skills of large language models
☆354 · Updated this week
Alternatives and similar repositories for NeMo-Skills:
Users interested in NeMo-Skills are comparing it to the libraries listed below.
- 🌾 OAT: A research-friendly framework for LLM online alignment, including preference learning, reinforcement learning, etc. ☆336 · Updated 2 weeks ago
- The official evaluation suite and dynamic data release for MixEval. ☆238 · Updated 5 months ago
- Reproducible, flexible LLM evaluations ☆197 · Updated last month
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆459 · Updated last year
- ☆671 · Updated last week
- Scalable toolkit for efficient model alignment ☆778 · Updated this week
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs (a minimal illustrative sketch appears after this list). Conceptually, spars… ☆322 · Updated 4 months ago
- ☆287 · Updated last month
- ☆327 · Updated 2 months ago
- PyTorch building blocks for the OLMo ecosystem ☆205 · Updated this week
- Automatic evals for LLMs ☆376 · Updated this week
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆409 · Updated last year
- RewardBench: the first evaluation tool for reward models. ☆562 · Updated 2 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆406 · Updated 6 months ago
- ☆524 · Updated 2 weeks ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆221 · Updated 6 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆354 · Updated 8 months ago
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM). ☆222 · Updated last month
- Official repo for paper: "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆220 · Updated last month
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆719 · Updated 7 months ago
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆323 · Updated 7 months ago
- ☆515 · Updated 5 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆301 · Updated last year
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆208 · Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024โ291Updated this week
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆136 · Updated 9 months ago
- Code for the paper "Training Software Engineering Agents and Verifiers with SWE-Gym" ☆448 · Updated last month
- Official repository for ORPO ☆450 · Updated 11 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆186 · Updated 3 weeks ago
- ☆276 · Updated 9 months ago
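For context on the memory-layer entry above, here is a minimal PyTorch sketch of a trainable key-value lookup layer. It is purely illustrative and is not code from the listed repository: the class and parameter names (`KeyValueMemoryLayer`, `num_slots`, `top_k`) are made up for this example, and the dense scoring over all slots is the naive variant (production memory-layer designs use tricks such as product-key retrieval to keep the lookup itself cheap).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class KeyValueMemoryLayer(nn.Module):
    """Illustrative trainable key-value memory lookup (hypothetical sketch).

    A large pool of learnable key/value slots adds parameters, but each token
    reads only its top-k slots, so the added per-token compute stays small.
    """

    def __init__(self, d_model: int, num_slots: int = 4096, top_k: int = 4):
        super().__init__()
        self.query_proj = nn.Linear(d_model, d_model)
        self.keys = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)
        self.values = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        q = self.query_proj(x)                                  # (B, T, D)
        # Naive dense scoring against every slot; real memory layers avoid
        # this full matmul (e.g. via product-key lookup).
        scores = q @ self.keys.t()                              # (B, T, num_slots)
        top_scores, top_idx = scores.topk(self.top_k, dim=-1)   # (B, T, k)
        weights = F.softmax(top_scores, dim=-1)                 # (B, T, k)
        gathered = self.values[top_idx]                         # (B, T, k, D)
        out = (weights.unsqueeze(-1) * gathered).sum(dim=-2)    # (B, T, D)
        return x + out                                          # residual connection


# Quick shape check
layer = KeyValueMemoryLayer(d_model=64, num_slots=1024, top_k=4)
y = layer(torch.randn(2, 8, 64))
print(y.shape)  # torch.Size([2, 8, 64])
```

The key property the sketch demonstrates: parameter count grows with `num_slots`, while the per-token work after scoring touches only `top_k` value vectors.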