CentreSecuriteIA / BELLS
Benchmarks for the Evaluation of LLM Supervision
☆32 · Updated last month
Alternatives and similar repositories for BELLS
Users interested in BELLS are comparing it to the repositories listed below.
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. ☆82 · Updated this week
- Collection of evals for Inspect AI ☆201 · Updated this week
- Inspect: A framework for large language model evaluations ☆1,225 · Updated this week
- METR Task Standard ☆157 · Updated 6 months ago
- Mechanistic Interpretability Visualizations using React ☆273 · Updated 7 months ago
- Sparse Autoencoder for Mechanistic Interpretability ☆257 · Updated last year
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆113 · Updated last year
- An open-source compliance-centered evaluation framework for Generative AI models ☆159 · Updated this week
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆786 · Updated this week
- This repository collects all relevant resources about interpretability in LLMs ☆368 · Updated 9 months ago
- Tools for studying developmental interpretability in neural networks. ☆100 · Updated last month
- 🧠 Starter templates for doing interpretability research ☆73 · Updated 2 years ago
- AI Verify ☆27 · Updated this week
- Croissant is a high-level format for machine learning datasets that brings together four rich layers. ☆682 · Updated this week
- ☆16 · Updated this week
- Tools for understanding how transformer predictions are built layer-by-layer ☆512 · Updated last year
- ☆45 · Updated last year
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆189 · Updated 3 months ago
- ☆99 · Updated 4 months ago
- Large population models ☆390 · Updated 2 weeks ago
- The nnsight package enables interpreting and manipulating the internals of deep learned models. ☆622 · Updated last week
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆210 · Updated 7 months ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆221 · Updated last year
- Moonshot - A simple and modular tool to evaluate and red-team any LLM application. ☆262 · Updated this week
- A Python SDK for LLM fine-tuning and inference on RunPod infrastructure ☆12 · Updated this week
- A library for generative social simulation ☆984 · Updated this week
- Inference-time scaling for LLMs-as-a-judge. ☆272 · Updated 3 weeks ago
- ☆245 · Updated 4 months ago
- Vivaria is METR's tool for running evaluations and conducting agent elicitation research. ☆103 · Updated last week
- Open-source interpretability artefacts for R1. ☆157 · Updated 3 months ago