METR / public-tasks
☆104 · Updated last week
Alternatives and similar repositories for public-tasks
Users interested in public-tasks are comparing it to the repositories listed below.
- METR Task Standard ☆163 · Updated 8 months ago
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆116 · Updated this week
- ☆113 · Updated last week
- Open source interpretability artefacts for R1. ☆163 · Updated 6 months ago
- ☆304 · Updated last year
- ☆142 · Updated last month
- A toolkit for describing model features and intervening on those features to steer behavior. ☆209 · Updated 11 months ago
- Draw more samples ☆194 · Updated last year
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆209 · Updated 2 weeks ago
- ☆29 · Updated 4 months ago
- ☆60 · Updated last month
- Extract full next-token probabilities via language model APIs ☆247 · Updated last year
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆104 · Updated last week
- ☆220 · Updated 7 months ago
- ☆105 · Updated this week
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆119 · Updated last year
- Inference API for many LLMs and other useful tools for empirical research ☆77 · Updated last week
- Training-Ready RL Environments + Evals ☆132 · Updated this week
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆129 · Updated 3 years ago
- Mechanistic Interpretability Visualizations using React ☆296 · Updated 10 months ago
- ☆138 · Updated 3 months ago
- Tools for studying developmental interpretability in neural networks. ☆111 · Updated 4 months ago
- Benchmarking Agentic LLM and VLM Reasoning On Games ☆202 · Updated 2 months ago
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆291 · Updated 3 weeks ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆229 · Updated 2 months ago
- Applying SAEs for fine-grained control ☆24 · Updated 10 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆222 · Updated 10 months ago
- Aidan Bench attempts to measure <big_model_smell> in LLMs. ☆312 · Updated 4 months ago
- Code for NeurIPS'24 paper 'Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization' ☆233 · Updated 3 months ago
- ☆128 · Updated last year