wschella / llm-reliability
Code for the paper "Larger and more instructable language models become less reliable"
☆29 · Updated 7 months ago
Alternatives and similar repositories for llm-reliability
Users interested in llm-reliability are comparing it to the repositories listed below.
- SCREWS: A Modular Framework for Reasoning with Revisions ☆27 · Updated last year
- [ACL 2024] Large Language Models for Automated Open-domain Scientific Hypotheses Discovery. It has also received the best poster award … ☆41 · Updated 7 months ago
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 4 months ago
- CiteME: a benchmark designed to test the ability of language models to find papers cited in scientific texts ☆45 · Updated 7 months ago
- Discovering Data-driven Hypotheses in the Wild ☆85 · Updated 6 months ago
- Aioli: A unified optimization framework for language model data mixing ☆27 · Updated 4 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆47 · Updated last year
- Dataset and evaluation suite enabling LLM instruction-following for scientific literature understanding ☆40 · Updated 2 months ago
- Efficient Dictionary Learning with Switch Sparse Autoencoders (SAEs) ☆23 · Updated 6 months ago
- ☆24 · Updated 8 months ago
- Are LLMs Capable of Data-based Statistical and Causal Reasoning? Benchmarking Advanced Quantitative Reasoning with Data ☆37 · Updated 3 months ago
- Official code release for "Training a Generally Curious Agent" ☆22 · Updated 2 weeks ago
- LitQA Eval: a difficult set of scientific questions that require the context of full-text research papers to answer ☆39 · Updated 5 months ago
- ☆45 · Updated last year
- Learning to Retrieve by Trying: source code for "Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval" ☆36 · Updated 7 months ago
- ☆32 · Updated 4 months ago
- ☆28 · Updated 3 months ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆114 · Updated last year
- Understanding the correlation between different LLM benchmarks ☆29 · Updated last year
- ScholarQABench data and evaluation pipeline ☆72 · Updated last month
- PyTorch implementation for MRL ☆18 · Updated last year
- Official repo for InSTA: Towards Internet-Scale Training For Agents ☆42 · Updated last week
- ☆50 · Updated last week
- Exploration of automated dataset selection approaches at large scales ☆42 · Updated 3 months ago
- ☆23 · Updated 2 months ago
- Official implementation of the Baby-AIGS system ☆23 · Updated 6 months ago
- ☆21 · Updated 3 months ago
- Code, results, and other artifacts from the paper introducing the WildChat-50m dataset and the Re-Wild model family ☆29 · Updated 2 months ago
- A testbed for agents and environments that can automatically improve models through data generation ☆24 · Updated 3 months ago
- [ACL 2025] Are Your LLMs Capable of Stable Reasoning? ☆25 · Updated 2 months ago