NVIDIA-NeMo / Evaluator
Open-source library for scalable, reproducible evaluation of AI models and benchmarks.
☆106 · Updated this week
Alternatives and similar repositories for Evaluator
Users interested in Evaluator are comparing it to the libraries listed below.
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆257 · Updated this week
- Manage scalable open LLM inference endpoints in Slurm clusters ☆277 · Updated last year
- A collection of LogitsProcessors to customize and enhance LLM behavior for specific tasks. ☆375 · Updated 5 months ago
- Reproducible, flexible LLM evaluations ☆293 · Updated 2 weeks ago
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆220 · Updated last month
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆349 · Updated 7 months ago
- Benchmarking library for RAG ☆248 · Updated last month
- What's In My Big Data (WIMBD) - a toolkit for analyzing large text datasets ☆224 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆258 · Updated 2 years ago
- PyTorch building blocks for the OLMo ecosystem ☆482 · Updated this week
- Complex Function Calling Benchmark. ☆149 · Updated 10 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆315 · Updated last year
- Run safety benchmarks against AI models and view detailed reports showing how well they performed. ☆112 · Updated this week
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆226 · Updated 3 months ago
- The HELMET Benchmark ☆187 · Updated 3 months ago
- Code for training & evaluating Contextual Document Embedding models ☆201 · Updated 6 months ago
- Automatic evals for LLMs ☆559 · Updated 5 months ago
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆322 · Updated this week
- Code for Zero-Shot Tokenizer Transfer ☆142 · Updated 10 months ago
- Official code for M-RᴇᴡᴀʀᴅBᴇɴᴄʜ: Evaluating Reward Models in Multilingual Settings (ACL 2025 Main) ☆38 · Updated 6 months ago
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆265 · Updated this week
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆218 · Updated 5 months ago
- Code for the paper "ROUTERBENCH: A Benchmark for Multi-LLM Routing System" ☆151 · Updated last year
- A family of compressed models obtained via pruning and knowledge distillation ☆359 · Updated last month
- LLM-Merging: Building LLMs Efficiently through Merging ☆207 · Updated last year
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation ☆69 · Updated 7 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆243 · Updated last year
- Official repository for the paper "ReasonIR: Training Retrievers for Reasoning Tasks". ☆209 · Updated 5 months ago
- A simplified implementation for experimenting with RLVR on GSM8K. This repository provides a starting point for exploring reasoning. ☆145 · Updated 10 months ago