allenai / CommonGen-Eval
Evaluating LLMs with CommonGen-Lite
☆88 · Updated 11 months ago
Alternatives and similar repositories for CommonGen-Eval:
Users interested in CommonGen-Eval are comparing it to the libraries listed below.
- Mixing Language Models with Self-Verification and Meta-Verification ☆100 · Updated 2 months ago
- Data preparation code for Amber 7B LLM ☆85 · Updated 9 months ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated 6 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- ☆117 · Updated 4 months ago
- Code repository for the c-BTM paper ☆105 · Updated last year
- Codebase accompanying the Summary of a Haystack paper. ☆74 · Updated 5 months ago
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆76 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆130 · Updated this week
- Functional Benchmarks and the Reasoning Gap ☆82 · Updated 4 months ago
- Simple replication of [ColBERT-v1](https://arxiv.org/abs/2004.12832). ☆79 · Updated 11 months ago
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆100 · Updated 10 months ago
- Pre-training code for CrystalCoder 7B LLM ☆55 · Updated 9 months ago
- ☆48 · Updated 3 months ago
- an implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated last year
- Data preparation code for CrystalCoder 7B LLM ☆44 · Updated 9 months ago
- Evaluating LLMs with fewer examples ☆145 · Updated 10 months ago
- This is the official repository for Inheritune. ☆109 · Updated last week
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆167 · Updated last month
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆94 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. ☆215 · Updated 10 months ago
- Open Implementations of LLM Analyses ☆98 · Updated 4 months ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆48 · Updated 7 months ago
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Flax. ☆65 · Updated 6 months ago
- ☆108 · Updated 3 weeks ago
- Advanced Reasoning Benchmark Dataset for LLMs ☆45 · Updated last year
- Code for ExploreToM ☆75 · Updated 2 months ago
- Official repo of Respond-and-Respond: data, code, and evaluation ☆104 · Updated 6 months ago