WildEval / ZeroEval
A simple unified framework for evaluating LLMs
☆164 · Updated 3 weeks ago
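The core idea behind a unified evaluation framework like this is to separate the model (a prompt-to-completion callable) from the tasks and their scoring functions, so new models and benchmarks can be mixed freely. The sketch below illustrates that pattern only; it is a minimal, hypothetical example, and all names in it (`Task`, `evaluate`, `exact_match`) are assumptions for illustration, not ZeroEval's actual API.

```python
# Generic sketch of a unified LLM evaluation loop (illustrative names only,
# not ZeroEval's real API).
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Task:
    name: str
    examples: List[Dict[str, str]]       # each example: {"prompt": ..., "answer": ...}
    score: Callable[[str, str], float]   # (prediction, reference) -> score in [0, 1]


def exact_match(prediction: str, reference: str) -> float:
    return float(prediction.strip().lower() == reference.strip().lower())


def evaluate(generate_fn: Callable[[str], str], tasks: List[Task]) -> Dict[str, float]:
    """Run one model (wrapped as a prompt -> completion callable) over every task."""
    results = {}
    for task in tasks:
        scores = [task.score(generate_fn(ex["prompt"]), ex["answer"]) for ex in task.examples]
        results[task.name] = sum(scores) / len(scores)
    return results


if __name__ == "__main__":
    toy_task = Task(
        name="toy-qa",
        examples=[{"prompt": "Capital of France?", "answer": "Paris"}],
        score=exact_match,
    )
    # A stand-in "model" that always answers "Paris".
    print(evaluate(lambda prompt: "Paris", [toy_task]))
```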
Alternatives and similar repositories for ZeroEval:
Users interested in ZeroEval are comparing it to the libraries listed below.
- Benchmarking LLMs with Challenging Tasks from Real Users ☆206 · Updated 2 months ago
- The official evaluation suite and dynamic data release for MixEval. ☆233 · Updated 2 months ago
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Lengths (ICLR 2024) ☆204 · Updated 7 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models. ☆197 · Updated 2 months ago
- LongEmbed: Extending Embedding Models for Long Context Retrieval (EMNLP 2024) ☆126 · Updated 2 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆157 · Updated this week
- Code for Paper: Training Software Engineering Agents and Verifiers with SWE-Gym ☆202 · Updated this week
- 🌍 Repository for "AppWorld: A Controllable World of Apps and People for Benchmarking Interactive Coding Agents", ACL'24 Best Resource Paper ☆134 · Updated last month
- Functional Benchmarks and the Reasoning Gap ☆82 · Updated 3 months ago
- Code and example data for the paper: Rule Based Rewards for Language Model Safety ☆174 · Updated 5 months ago
- Code for the paper "Fishing for Magikarp" ☆139 · Updated this week
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆175 · Updated last month
- This is the official repository for Inheritune. ☆108 · Updated 3 months ago
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆153 · Updated 3 months ago
- Automatic Evals for Instruction-Tuned Models ☆100 · Updated this week
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆154 · Updated 2 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆297 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆247 · Updated 6 months ago
- Reproducible, flexible LLM evaluations ☆118 · Updated last month
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆129 · Updated 2 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆114 · Updated 7 months ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆164 · Updated 2 months ago
- Expert Specialized Fine-Tuning ☆167 · Updated 3 months ago
- awesome synthetic (text) datasets ☆253 · Updated 2 months ago
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆178 · Updated last month