CentreSecuriteIA / BELLS
Benchmarks for the Evaluation of LLM Supervision
☆32 · Updated 2 months ago
Alternatives and similar repositories for BELLS
Users interested in BELLS are comparing it to the repositories listed below.
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆135 · Updated last week
- METR Task Standard ☆168 · Updated 10 months ago
- Collection of evals for Inspect AI ☆313 · Updated this week
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆195 · Updated 8 months ago
- Sparse Autoencoder for Mechanistic Interpretability ☆286 · Updated last year
- An open-source compliance-centered evaluation framework for Generative AI models ☆176 · Updated this week
- Inspect: A framework for large language model evaluations ☆1,580 · Updated last week
- Mechanistic Interpretability Visualizations using React ☆304 · Updated last year
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆838 · Updated 2 months ago
- Tools for studying developmental interpretability in neural networks. ☆117 · Updated 5 months ago
- This repository collects all relevant resources about interpretability in LLMs ☆389 · Updated last year
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆234 · Updated 4 months ago
- AI Verify ☆39 · Updated this week
- Open-source interpretability platform 🧠 ☆562 · Updated this week
- The nnsight package enables interpreting and manipulating the internals of deep learning models. ☆738 · Updated this week
- Sparsify transformers with SAEs and transcoders ☆673 · Updated this week
- Moonshot - A simple and modular tool to evaluate and red-team any LLM application. ☆294 · Updated last week
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆232 · Updated last year
- Training Sparse Autoencoders on Language Models ☆1,118 · Updated this week
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆122 · Updated last year
- Tools for understanding how transformer predictions are built layer-by-layer ☆554 · Updated 4 months ago
- 🪄 Interpreto is an interpretability toolbox for LLMs ☆84 · Updated last week
- Improving Alignment and Robustness with Circuit Breakers ☆248 · Updated last year