NVIDIA / NeMo-Skills
A project to improve skills of large language models
☆248 · Updated this week
Alternatives and similar repositories for NeMo-Skills:
Users interested in NeMo-Skills are comparing it to the repositories listed below.
- Reproducible, flexible LLM evaluations ☆160 · Updated 2 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Long Length (ICLR 2024) ☆204 · Updated 9 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆296 · Updated last year
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆389 · Updated 4 months ago
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ☆451 · Updated 11 months ago
- Codes for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆307 · Updated 4 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆184 · Updated 6 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆215 · Updated 3 months ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including preference learning, reinforcement learning, etc. ☆194 · Updated last week
- The official evaluation suite and dynamic data release for MixEval. ☆231 · Updated 3 months ago
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆153 · Updated 2 months ago
- ☆251 · Updated last year
- Implementation of the LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens Paper ☆125 · Updated 7 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆135 · Updated 3 months ago
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆172 · Updated 9 months ago
- ☆149 · Updated last week
- ☆257 · Updated 6 months ago
- ☆130 · Updated 2 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆252 · Updated 7 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆203 · Updated 3 months ago
- ☆319 · Updated 2 weeks ago
- RewardBench: the first evaluation tool for reward models. ☆505 · Updated this week
- Repo of paper "Free Process Rewards without Process Labels" ☆123 · Updated last month
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" ☆100 · Updated 7 months ago
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆184 · Updated 5 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆348 · Updated 5 months ago
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆103 · Updated this week
- LLM-Merging: Building LLMs Efficiently through Merging ☆190 · Updated 4 months ago