NVIDIA / NeMo-Skills
A project to improve skills of large language models
☆275 · Updated this week
Alternatives and similar repositories for NeMo-Skills:
Users interested in NeMo-Skills are comparing it to the repositories listed below.
- Reproducible, flexible LLM evaluations · ☆186 · Updated 3 weeks ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" · ☆300 · Updated last year
- Benchmarking LLMs with Challenging Tasks from Real Users · ☆221 · Updated 5 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including preference learning, reinforcement learning, etc. · ☆313 · Updated last week
- DSIR large-scale data selection framework for language model training · ☆245 · Updated last year
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 · ☆317 · Updated 6 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" · ☆457 · Updated last year
- Scalable toolkit for efficient model alignment · ☆761 · Updated this week
- The official evaluation suite and dynamic data release for MixEval. · ☆234 · Updated 5 months ago
- ☆148 · Updated 3 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. · ☆405 · Updated 11 months ago
- ☆255 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" · ☆174 · Updated last month
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore" · ☆196 · Updated this week
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ · ☆196 · Updated 11 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) · ☆205 · Updated 10 months ago
- [NeurIPS'24 Spotlight] Observational Scaling Laws · ☆54 · Updated 6 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters · ☆254 · Updated 9 months ago
- ☆265 · Updated 8 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" · ☆402 · Updated 5 months ago
- Official repository for ORPO · ☆447 · Updated 10 months ago
- PyTorch building blocks for the OLMo ecosystem · ☆191 · Updated this week
- LOFT: A 1 Million+ Token Long-Context Benchmark · ☆184 · Updated last week
- ☆617 · Updated 2 weeks ago
- RewardBench: the first evaluation tool for reward models. · ☆547 · Updated last month
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning · ☆354 · Updated 7 months ago
- ☆509 · Updated 4 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning · ☆180 · Updated 3 weeks ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… · ☆313 · Updated 4 months ago
- Official Repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" · ☆234 · Updated last month