ConsequentAI / fneval
Functional Benchmarks and the Reasoning Gap
☆89 · Updated last year
Alternatives and similar repositories for fneval
Users interested in fneval are comparing it to the repositories listed below.
- Evaluating LLMs with fewer examples ☆169 · Updated last year
- ☆123 · Updated 11 months ago
- Repository for the paper Stream of Search: Learning to Search in Language ☆153 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- ☆130 · Updated last year
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 11 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆175 · Updated last year
- Replicating O1 inference-time scaling laws ☆92 · Updated last year
- ☆74 · Updated last year
- ☆91 · Updated last month
- Official repo for Learning to Reason for Long-Form Story Generation ☆74 · Updated 9 months ago
- Evaluating LLMs with CommonGen-Lite ☆94 · Updated last year
- ☆99 · Updated last year
- Scripts for generating synthetic finetuning data for reducing sycophancy. ☆119 · Updated 2 years ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆112 · Updated last year
- ☆152 · Updated 5 months ago
- Code accompanying "How I learned to start worrying about prompt formatting". ☆115 · Updated 8 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆63 · Updated last year
- The first dense retrieval model that can be prompted like an LM ☆90 · Updated 9 months ago
- ☆105 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆226 · Updated 4 months ago
- ☆54 · Updated last year
- ☆56 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆83 · Updated last year
- Learning to Retrieve by Trying — source code for Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval ☆51 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆94 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆78 · Updated last year