anthropics/evals
☆260 · Updated 8 months ago

Alternatives and similar repositories for evals:
Users interested in evals are comparing it to the libraries listed below.
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆197 · Updated this week
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆73 · Updated last year
- Mechanistic Interpretability Visualizations using React ☆233 · Updated 3 months ago
- ☆130 · Updated 4 months ago
- ☆211 · Updated 5 months ago
- ☆232 · Updated 2 years ago
- Code and data for "Measuring and Narrowing the Compositionality Gap in Language Models" ☆310 · Updated last year
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆470 · Updated last year
- [ICLR 2024 Spotlight] FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets ☆214 · Updated last year
- [NeurIPS 2023 D&B] Code repository for the InterCode benchmark (https://arxiv.org/abs/2306.14898) ☆209 · Updated 10 months ago
- Erasing concepts from neural representations with provable guarantees ☆226 · Updated last month
- ☆121 · Updated last year
- Used for adaptive human-in-the-loop evaluation of language and embedding models. ☆306 · Updated 2 years ago
- Python library which enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference… ☆204 · Updated 2 months ago
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆90 · Updated last month
- Extract full next-token probabilities via language model APIs ☆233 · Updated last year
- A dataset of alignment research and code to reproduce it ☆74 · Updated last year
- A repository for transformer critique learning and generation ☆89 · Updated last year
- Inspecting and Editing Knowledge Representations in Language Models ☆112 · Updated last year
- ☆262 · Updated last year
- ☆159 · Updated 2 years ago
- Code accompanying the paper "Pretraining Language Models with Human Preferences" ☆180 · Updated last year
- RuLES: a benchmark for evaluating rule-following in language models ☆220 · Updated 3 weeks ago
- Collection of evals for Inspect AI ☆92 · Updated this week
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆116 · Updated 2 years ago
- LLM experiments done during SERI MATS, focusing on activation steering and interpreting activation spaces ☆91 · Updated last year
- Emergent world representations: exploring a sequence model trained on a synthetic task ☆177 · Updated last year
- Tools for understanding how transformer predictions are built layer-by-layer ☆479 · Updated 9 months ago
- A set of utilities for running few-shot prompting experiments on large language models ☆118 · Updated last year