CentreSecuriteIA / BELLS
Benchmarks for the Evaluation of LLM Supervision
☆33 · Updated 3 weeks ago
Alternatives and similar repositories for BELLS
Users interested in BELLS are comparing it to the repositories listed below:
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆153 · Updated last week
- Collection of evals for Inspect AI ☆357 · Updated last week
- METR Task Standard ☆173 · Updated last year
- This repository collects all relevant resources about interpretability in LLMs ☆391 · Updated last year
- Inspect: A framework for large language model evaluations ☆1,727 · Updated this week
- Mechanistic Interpretability Visualizations using React ☆320 · Updated last year
- Moonshot - A simple and modular tool to evaluate and red-team any LLM application. ☆310 · Updated last week
- ☆40 · Updated 3 weeks ago
- ☆77 · Updated 2 years ago
- 🧠 Starter templates for doing interpretability research ☆76 · Updated 2 years ago
- Sparse Autoencoder for Mechanistic Interpretability ☆291 · Updated last year
- ☆50 · Updated last year
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆800 · Updated last week
- This was designed for interp researchers who want to do research on or with interp agents to give quality of life improvements and fix … ☆121 · Updated 3 weeks ago
- An open-source compliance-centered evaluation framework for Generative AI models ☆179 · Updated this week
- Tools for understanding how transformer predictions are built layer-by-layer ☆567 · Updated 6 months ago
- Training Sparse Autoencoders on Language Models ☆1,201 · Updated this week
- ☆17 · Updated last week
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆127 · Updated last year
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆857 · Updated 2 weeks ago
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆201 · Updated 9 months ago
- Improving Alignment and Robustness with Circuit Breakers ☆258 · Updated last year
- Tools for studying developmental interpretability in neural networks. ☆126 · Updated last month
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆240 · Updated last year
- AI Verify ☆47 · Updated 3 weeks ago
- ☆70 · Updated last month
- Run safety benchmarks against AI models and view detailed reports showing how well they performed. ☆117 · Updated last week
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆847 · Updated last year
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆340 · Updated 7 months ago
- ☆389 · Updated 5 months ago