anthropics / evals
☆316 · Updated last year
Alternatives and similar repositories for evals
Users interested in evals are comparing it to the libraries listed below.
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆216 · Updated this week
- METR Task Standard ☆169 · Updated 10 months ago
- ☆124 · Updated 2 months ago
- Mechanistic Interpretability Visualizations using React ☆303 · Updated last year
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆99 · Updated 2 years ago
- ☆143 · Updated 5 months ago
- Python library which enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference… ☆215 · Updated 6 months ago
- A toolkit for describing model features and intervening on those features to steer behavior. ☆223 · Updated last week
- Extract full next-token probabilities via language model APIs ☆248 · Updated last year
- ☆283 · Updated last year
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆143 · Updated last week
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆122 · Updated last year
- Tools for understanding how transformer predictions are built layer-by-layer ☆554 · Updated 4 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆233 · Updated last year
- Draw more samples ☆198 · Updated last year
- ☆260 · Updated last year
- ☆111 · Updated last month
- ☆249 · Updated 3 years ago
- ☆132 · Updated 2 years ago
- Erasing concepts from neural representations with provable guarantees ☆240 · Updated 10 months ago
- Aligning AI With Shared Human Values (ICLR 2021) ☆306 · Updated 2 years ago
- ☆301 · Updated 2 years ago
- RuLES: a benchmark for evaluating rule-following in language models ☆244 · Updated 10 months ago
- Utilities for the HuggingFace transformers library ☆73 · Updated 2 years ago
- LLM experiments done during SERI MATS - focusing on activation steering / interpreting activation spaces ☆100 · Updated 2 years ago
- ☆134 · Updated last year
- Steering vectors for transformer language models in Pytorch / Huggingface (a minimal sketch of the general idea follows this list)
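
Several of the entries above (the feature-steering toolkit, the SERI MATS activation-steering experiments, and the steering-vectors library) revolve around the same basic idea: add a fixed direction to a transformer layer's residual stream at inference time to shift the model's behavior. The snippet below is a minimal, illustrative sketch of that idea using a forward hook on a Hugging Face causal LM. The model name, layer index, steering strength, and the contrastive-prompt recipe for building the vector are all assumptions for demonstration, not the API of any repository listed here.

```python
# Minimal sketch of activation steering via a forward hook (illustrative only;
# not the implementation used by any repository in the list above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"   # assumption: any causal LM whose blocks live at model.transformer.h
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

layer_idx = 6         # assumption: which transformer block to steer
strength = 4.0        # assumption: arbitrary steering strength

def mean_hidden(text: str) -> torch.Tensor:
    """Mean residual-stream activation after block `layer_idx` for a prompt."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so [layer_idx + 1] is block layer_idx's output.
    return out.hidden_states[layer_idx + 1][0].mean(dim=0)

# One common recipe: the steering vector is the difference in activations
# between a contrastive pair of prompts.
steer = mean_hidden("I love this, it is wonderful.") - mean_hidden("I hate this, it is terrible.")

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the residual stream;
    # returning a new tuple from the hook replaces the block's output.
    return (output[0] + strength * steer,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(add_steering)
try:
    ids = tok("The movie was", return_tensors="pt")
    gen = model.generate(**ids, max_new_tokens=20, do_sample=False)
    print(tok.decode(gen[0], skip_special_tokens=True))
finally:
    handle.remove()  # restore the unmodified model
```

The hook-based approach leaves the model weights untouched; removing the handle restores normal behavior, which makes it easy to compare steered and unsteered generations on the same prompt.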