vdlad / Remarkable-Robustness-of-LLMs
Codebase for the paper "The Remarkable Robustness of LLMs: Stages of Inference?"
☆16 · Updated 7 months ago
Alternatives and similar repositories for Remarkable-Robustness-of-LLMs:
Users interested in Remarkable-Robustness-of-LLMs are comparing it to the libraries listed below.
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆54 · Updated 5 months ago
- ☆48 · Updated 3 months ago
- Evaluation of neuro-symbolic engines ☆34 · Updated 6 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆74 · Updated 5 months ago
- ☆21 · Updated 3 weeks ago
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search ☆69 · Updated 2 months ago
- ☆67 · Updated 6 months ago
- Open source replication of Anthropic's Crosscoders for Model Diffing ☆39 · Updated 3 months ago
- DSBench: How Far are Data Science Agents from Becoming Data Science Experts? ☆43 · Updated this week
- Aioli: A unified optimization framework for language model data mixing ☆20 · Updated last month
- Functional Benchmarks and the Reasoning Gap ☆82 · Updated 4 months ago
- ReBase: Training Task Experts through Retrieval Based Distillation ☆28 · Updated 2 weeks ago
- ☆48 · Updated last year
- FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions ☆42 · Updated 7 months ago
- PyTorch implementation for MRL ☆18 · Updated 11 months ago
- ☆22 · Updated 2 months ago
- Scalable Meta-Evaluation of LLMs as Evaluators ☆43 · Updated last year
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆24 · Updated 3 months ago
- ☆58 · Updated 9 months ago
- ☆17 · Updated 4 months ago
- EMNLP 2024 "Re-reading improves reasoning in large language models". Simply repeating the question to get bidirectional understanding for… ☆24 · Updated 2 months ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆43 · Updated last month
- Arrakis is a library to conduct, track and visualize mechanistic interpretability experiments. ☆25 · Updated last week
- ☆19 · Updated 4 months ago
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆81 · Updated 11 months ago
- ☆23 · Updated 5 months ago
- ☆26 · Updated last month
- Public code repo for paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆98 · Updated 4 months ago
- This repository includes a benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses". ☆27 · Updated 6 months ago