CentreSecuriteIA / BELLS
Benchmarks for the Evaluation of LLM Supervision
☆32 · Updated this week
Alternatives and similar repositories for BELLS
Users interested in BELLS are comparing it to the libraries listed below.
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆99 · Updated this week
- Collection of evals for Inspect AI ☆250 · Updated this week
- Inspect: A framework for large language model evaluations ☆1,387 · Updated this week
- METR Task Standard ☆163 · Updated 8 months ago
- An open-source compliance-centered evaluation framework for Generative AI models ☆167 · Updated this week
- AI Verify ☆35 · Updated this week
- ☆35 · Updated 2 weeks ago
- Mechanistic Interpretability Visualizations using React ☆291 · Updated 9 months ago
- ☆16 · Updated this week
- ☆74 · Updated 2 years ago
- This repository collects all relevant resources about interpretability in LLMs ☆374 · Updated 11 months ago
- ☆735 · Updated last week
- ☆47 · Updated last year
- ☆104 · Updated 2 weeks ago
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆819 · Updated last month
- Tools for studying developmental interpretability in neural networks. ☆105 · Updated 3 months ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆228 · Updated 2 months ago
- Sparse Autoencoder for Mechanistic Interpretability ☆269 · Updated last year
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆676 · Updated this week
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆193 · Updated 5 months ago
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆745 · Updated last year
- The Foundation Model Transparency Index ☆83 · Updated last year
- Red-Teaming Language Models with DSPy ☆216 · Updated 7 months ago
- ☆59 · Updated 2 weeks ago
- Tools for understanding how transformer predictions are built layer-by-layer ☆530 · Updated 2 months ago
- 🧠 Starter templates for doing interpretability research ☆75 · Updated 2 years ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆118 · Updated last year
- Training Sparse Autoencoders on Language Models ☆985 · Updated this week
- ☆216 · Updated 7 months ago
- Decoder-only transformer, built from scratch with PyTorch ☆31 · Updated last year