NVIDIA-NeMo / Evaluator
Open-source library for scalable, reproducible evaluation of AI models and benchmarks.
☆173 · Updated this week
Alternatives and similar repositories for Evaluator
Users interested in Evaluator are comparing it to the libraries listed below.
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆267 · Updated this week
- Manage scalable open LLM inference endpoints in Slurm clusters ☆278 · Updated last year
- Code for training & evaluating Contextual Document Embedding models ☆202 · Updated 8 months ago
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research ☆276 · Updated this week
- Reproducible, flexible LLM evaluations ☆316 · Updated last month
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆222 · Updated 3 weeks ago
- A framework for standardizing evaluations of large foundation models, beyond single-score reporting and rankings. ☆173 · Updated last week
- Complex Function Calling Benchmark. ☆160 · Updated 11 months ago
- A collection of LogitsProcessors to customize and enhance LLM behavior for specific tasks (a minimal interface sketch follows this list). ☆380 · Updated 6 months ago
- Matrix (Multi-Agent daTa geneRation Infra and eXperimentation framework) is a versatile engine for multi-agent conversational data generation… ☆256 · Updated this week
- ☆218 · Updated 2 months ago
- Let's build better datasets, together! ☆267 · Updated last year
- ☆138 · Updated 4 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆316 · Updated 2 years ago
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆343 · Updated 3 weeks ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 10 months ago
- Collection of scripts and notebooks for OpenAI's latest GPT OSS models ☆494 · Updated 4 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆306 · Updated last month
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models… ☆244 · Updated last year
- Code for the paper "RouterBench: A Benchmark for Multi-LLM Routing System" ☆153 · Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆351 · Updated 8 months ago
- Code for the paper "Fishing for Magikarp" ☆178 · Updated 7 months ago
- Source code for the collaborative reasoner research project at Meta FAIR. ☆112 · Updated 8 months ago
- Awesome synthetic (text) datasets ☆320 · Updated this week
- Official repository for the paper "ReasonIR: Training Retrievers for Reasoning Tasks". ☆213 · Updated 6 months ago
- Load compute kernels from the Hub ☆359 · Updated this week
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach (see the sketch after this list). ☆236 · Updated 4 months ago
- ☆224 · Updated last month
- Evaluating LLMs with fewer examples ☆169 · Updated last year
- PyTorch building blocks for the OLMo ecosystem ☆681 · Updated this week
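
For the LogitsProcessors entry above: a minimal sketch of what a custom logits processor looks like, assuming the standard Hugging Face `transformers` interface rather than that repository's own abstractions; `BanTokenProcessor` and the token id are illustrative.

```python
import torch
from transformers import LogitsProcessor, LogitsProcessorList

class BanTokenProcessor(LogitsProcessor):
    """Illustrative processor: forbid a single token id at every decoding step."""

    def __init__(self, banned_token_id: int):
        self.banned_token_id = banned_token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        # Setting the banned token's logit to -inf means it can never be sampled.
        scores[:, self.banned_token_id] = float("-inf")
        return scores

# Usage sketch (model/tokenizer loading omitted):
# processors = LogitsProcessorList([BanTokenProcessor(banned_token_id=42)])
# output = model.generate(**inputs, logits_processor=processors)
```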
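
For the BABILong entry: a minimal sketch of the generic needle-in-a-haystack pattern it builds on, not BABILong's actual harness; every name here is hypothetical.

```python
def build_haystack(needle: str, filler: str, n_fillers: int, depth: float) -> str:
    """Place the needle at a relative depth (0.0 = start, 1.0 = end) inside filler text."""
    chunks = [filler] * n_fillers
    chunks.insert(int(depth * n_fillers), needle)
    return "\n".join(chunks)

def contains_answer(response: str, answer: str) -> bool:
    """Simplest possible scoring: substring match on the expected answer."""
    return answer.lower() in response.lower()

needle = "The secret passphrase is blue-harvest-42."
prompt = (
    build_haystack(needle, "The sky was clear that day.", n_fillers=2000, depth=0.5)
    + "\n\nQuestion: What is the secret passphrase?"
)
# response = llm(prompt)  # hypothetical call to the model under evaluation
# print(contains_answer(response, "blue-harvest-42"))
```

Sweeping `depth` and `n_fillers` is what turns this into a long-context probe: the same question is asked with the needle buried at different positions and context lengths.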