ConsequentAI / fneval
Functional Benchmarks and the Reasoning Gap
☆88 · Updated 9 months ago
Alternatives and similar repositories for fneval
Users interested in fneval are comparing it to the repositories listed below.
- Evaluating LLMs with fewer examples ☆160 · Updated last year
- Repository for the paper "Stream of Search: Learning to Search in Language" ☆149 · Updated 5 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- ☆117 · Updated 4 months ago
- Evaluating LLMs with CommonGen-Lite ☆90 · Updated last year
- ☆66 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated 10 months ago
- Implementation of the paper "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" ☆58 · Updated 7 months ago
- ☆69 · Updated last month
- ☆134 · Updated 3 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆173 · Updated 4 months ago
- ☆124 · Updated 9 months ago
- ☆68 · Updated 10 months ago
- ☆98 · Updated last year
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆107 · Updated 9 months ago
- OpenCoconut implements a latent reasoning paradigm where thoughts are generated before decoding. ☆173 · Updated 5 months ago
- ☆52 · Updated 8 months ago
- A collection of benchmark logs for different LLMs ☆119 · Updated 11 months ago
- Code accompanying "How I learned to start worrying about prompt formatting" ☆106 · Updated last month
- Replicating O1 inference-time scaling laws ☆89 · Updated 7 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆103 · Updated 2 months ago
- Scripts for generating synthetic finetuning data for reducing sycophancy ☆113 · Updated last year
- Official repo for "Learning to Reason for Long-Form Story Generation" ☆65 · Updated 2 months ago
- ☆150 · Updated last year
- ModuleFormer is an MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆222 · Updated last year
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆86 · Updated last year
- Code for the NeurIPS 2024 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆220 · Updated 7 months ago
- The first dense retrieval model that can be prompted like an LM ☆80 · Updated 2 months ago
- Can Language Models Solve Olympiad Programming? ☆119 · Updated 5 months ago
- Benchmarking LLMs with Challenging Tasks from Real Users ☆228 · Updated 8 months ago