anthropics / evals
☆307 · Updated last year
Alternatives and similar repositories for evals
Users interested in evals are comparing it to the libraries listed below.
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆212 · Updated this week
- METR Task Standard ☆167 · Updated 9 months ago
- ☆139 · Updated 3 months ago
- Mechanistic Interpretability Visualizations using React ☆299 · Updated 10 months ago
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆94 · Updated 2 years ago
- ☆281 · Updated last year
- Python library which enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference… ☆215 · Updated 5 months ago
- Extract full next-token probabilities via language model APIs ☆247 · Updated last year
- A toolkit for describing model features and intervening on those features to steer behavior. ☆212 · Updated last year
- ☆252 · Updated last year
- ☆246 · Updated 2 years ago
- ☆117 · Updated 3 weeks ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆226 · Updated 10 months ago
- ☆106 · Updated this week
- Erasing concepts from neural representations with provable guarantees ☆239 · Updated 9 months ago
- Tools for understanding how transformer predictions are built layer-by-layer ☆539 · Updated 3 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆129 · Updated 3 years ago
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆127 · Updated this week
- Collection of evals for Inspect AI ☆280 · Updated last week
- Code and data for "Measuring and Narrowing the Compositionality Gap in Language Models" ☆323 · Updated last year
- Aligning AI With Shared Human Values (ICLR 2021) ☆303 · Updated 2 years ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training". ☆121 · Updated last year
- ☆297 · Updated last year
- RuLES: a benchmark for evaluating rule-following in language models ☆238 · Updated 8 months ago
- Draw more samples ☆194 · Updated last year
- [ICLR 2024 Spotlight] FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets ☆217 · Updated last year
- ☆131 · Updated 2 years ago
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆191 · Updated 2 years ago
- ☆61 · Updated last month
- A dataset of alignment research and code to reproduce it ☆78 · Updated 2 years ago