vdlad / Remarkable-Robustness-of-LLMs
Codebase for the paper "The Remarkable Robustness of LLMs: Stages of Inference?"
☆18 · Updated 4 months ago
Alternatives and similar repositories for Remarkable-Robustness-of-LLMs
Users interested in Remarkable-Robustness-of-LLMs are comparing it to the libraries listed below.
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆109 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- A curated list of the role of small models in the LLM era ☆105 · Updated last year
- Codebase accompanying the Summary of a Haystack paper ☆79 · Updated last year
- Evaluating LLMs with fewer examples ☆163 · Updated last year
- Verifiers for LLM Reinforcement Learning ☆74 · Updated 6 months ago
- ☆80 · Updated last week
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 8 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆92 · Updated 10 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆42 · Updated last year
- ☆78 · Updated 8 months ago
- Tree prompting: easy-to-use scikit-learn interface for improved prompting ☆40 · Updated last year
- Aioli: A unified optimization framework for language model data mixing ☆27 · Updated 8 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- Dataset and evaluation suite enabling LLM instruction-following for scientific literature understanding ☆42 · Updated 6 months ago
- ☆69 · Updated last year
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆90 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆136 · Updated 3 months ago
- NeurIPS 2024 tutorial on LLM Inference ☆47 · Updated 10 months ago
- ☆19 · Updated 2 months ago
- ☆51 · Updated 6 months ago
- ☆23 · Updated 8 months ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆49 · Updated 8 months ago
- Dynamic Cheatsheet: Test-Time Learning with Adaptive Memory ☆97 · Updated 4 months ago
- Source code for the collaborative reasoner research project at Meta FAIR ☆102 · Updated 5 months ago
- [ICLR 2025] DSBench: How Far Are Data Science Agents from Becoming Data Science Experts? ☆76 · Updated last month
- A framework for standardizing evaluations of large foundation models, beyond single-score reporting and rankings ☆168 · Updated this week
- ☆109 · Updated 8 months ago
- PASTA: Post-hoc Attention Steering for LLMs ☆122 · Updated 10 months ago