anthropics/evals
☆266 · Updated 9 months ago
Alternatives and similar repositories for evals:
Users interested in evals are comparing it to the libraries listed below.
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆198 · Updated last week
- ☆264 · Updated last year
- Mechanistic Interpretability Visualizations using React ☆239 · Updated 3 months ago
- ☆132 · Updated 5 months ago
- RuLES: a benchmark for evaluating rule-following in language models ☆220 · Updated last month
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆74 · Updated last year
- Python library which enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference… ☆207 · Updated 3 months ago
- ☆236 · Updated 2 years ago
- DialOp: Decision-oriented dialogue environments for collaborative language agents ☆106 · Updated 5 months ago
- METR Task Standard ☆147 · Updated 2 months ago
- ☆217 · Updated 6 months ago
- A dataset of alignment research and code to reproduce it ☆77 · Updated last year
- Extract full next-token probabilities via language model APIs ☆240 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆85 · Updated 6 months ago
- Code accompanying the paper "Pretraining Language Models with Human Preferences" ☆180 · Updated last year
- LLM experiments done during SERI MATS, focusing on activation steering / interpreting activation spaces ☆92 · Updated last year
- Code and data for "Measuring and Narrowing the Compositionality Gap in Language Models" ☆312 · Updated last year
- A repository for transformer critique learning and generation ☆89 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆194 · Updated 4 months ago
- Inspecting and Editing Knowledge Representations in Language Models ☆115 · Updated last year
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆100 · Updated last year
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆119 · Updated 2 years ago
- Steering Llama 2 with Contrastive Activation Addition ☆137 · Updated 10 months ago
- ☆88 · Updated last month
- ☆121 · Updated last year
- ☆178 · Updated 2 years ago
- [ICLR 2024 Spotlight] FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets ☆216 · Updated last year
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆477 · Updated last year
- Erasing concepts from neural representations with provable guarantees ☆228 · Updated 2 months ago
- Code and data accompanying the arXiv paper "Faithful Chain-of-Thought Reasoning". ☆158 · Updated 11 months ago