NVIDIA / NeMo-Skills
A project to improve skills of large language models
☆259 · Updated this week
Alternatives and similar repositories for NeMo-Skills:
Users interested in NeMo-Skills are comparing it to the repositories listed below.
- Reproducible, flexible LLM evaluations ☆176 · Updated 3 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆393 · Updated 5 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆298 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆166 · Updated 2 weeks ago
- ☆253 · Updated last year
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ☆454 · Updated last year
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆189 · Updated 10 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including preference learning, reinforcement learning, etc. ☆224 · Updated 2 weeks ago
- ☆260 · Updated last week
- LongRoPE is a novel method that extends the context window of pre-trained LLMs to 2048k tokens ☆209 · Updated 7 months ago
- ☆263 · Updated 7 months ago
- Explorations into some recent techniques surrounding speculative decoding ☆250 · Updated 3 months ago
- PyTorch building blocks for the OLMo ecosystem ☆172 · Updated this week
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆313 · Updated 5 months ago
- DSIR large-scale data selection framework for language model training ☆244 · Updated 11 months ago
- Implementation of the LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens Paper ☆129 · Updated 8 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to extreme lengths (ICLR 2024) ☆206 · Updated 10 months ago
- Code and data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆250 · Updated 6 months ago
- A simple unified framework for evaluating LLMs ☆206 · Updated 2 weeks ago
- The official evaluation suite and dynamic data release for MixEval ☆233 · Updated 4 months ago
- ☆559 · Updated last week
- Manage scalable open LLM inference endpoints in Slurm clusters ☆253 · Updated 8 months ago
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆201 · Updated last week
- Official repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" ☆229 · Updated last month
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs ☆405 · Updated 11 months ago
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM) ☆209 · Updated this week
- ☆325 · Updated last month
- Multipack distributed sampler for fast padding-free training of LLMs ☆186 · Updated 7 months ago
- Official repository for ORPO ☆445 · Updated 9 months ago
- The HELMET Benchmark ☆121 · Updated last week