CentreSecuriteIA / BELLS
Benchmarks for the Evaluation of LLM Supervision
☆32 · Updated last week
Alternatives and similar repositories for BELLS
Users interested in BELLS are comparing it to the repositories listed below.
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments. (☆76 · updated this week)
- Collection of evals for Inspect AI (☆178 · updated last week)
- METR Task Standard (☆154 · updated 5 months ago)
- Inspect: a framework for large language model evaluations; a usage sketch follows this list. (☆1,145 · updated this week)
- This repository collects all relevant resources about interpretability in LLMs (☆363 · updated 8 months ago)
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". (☆109 · updated last year)
- Mechanistic Interpretability Visualizations using React (☆262 · updated 7 months ago)
- Find the test-data samples on which your (generative) model makes mistakes. (☆28 · updated 9 months ago)
- (☆72 · updated 2 years ago)
- Stanford NLP Python library for understanding and improving PyTorch models via interventions (☆770 · updated this week)
- A library for generative social simulation (☆936 · updated this week)
- (☆45 · updated 11 months ago)
- (☆617 · updated last week)
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). (☆206 · updated 7 months ago)
- Sparse Autoencoder for Mechanistic Interpretability (☆256 · updated last year)
- (☆15 · updated last year)
- An open-source compliance-centered evaluation framework for Generative AI models (☆158 · updated this week)
- Tools for studying developmental interpretability in neural networks. (☆99 · updated 3 weeks ago)
- (☆14 · updated 2 weeks ago)
- Large population models (☆378 · updated last week)
- A Python SDK for LLM fine-tuning and inference on RunPod infrastructure (☆11 · updated 2 weeks ago)
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. (☆217 · updated last year)
- 🧠 Starter templates for doing interpretability research (☆72 · updated 2 years ago)
- Steering vectors for transformer language models in PyTorch / Hugging Face (☆115 · updated 4 months ago)
- The nnsight package enables interpreting and manipulating the internals of deep learning models; a minimal tracing sketch follows this list. (☆608 · updated last week)
- 📚 A curated list of papers & technical articles on AI Quality & Safety (☆188 · updated 3 months ago)
- (☆315 · updated this week)
- A toolkit for describing model features and intervening on those features to steer behavior. (☆193 · updated 8 months ago)
- (☆134 · updated 3 months ago)
- Inference API for many LLMs and other useful tools for empirical research (☆52 · updated this week)
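
The Inspect usage sketch referenced above: an Inspect eval bundles a dataset of `Sample`s with a solver and a scorer into a `Task`, which `eval()` then runs against a model. The task name, sample, and model identifier below are invented for illustration, and parameter names have shifted between Inspect releases, so treat this as a sketch of the pattern rather than canonical usage.

```python
# Minimal sketch of an Inspect (inspect_ai) eval; task, sample, and model
# identifier are illustrative only — check the Inspect docs for current APIs.
from inspect_ai import Task, task, eval
from inspect_ai.dataset import Sample
from inspect_ai.scorer import exact
from inspect_ai.solver import generate

@task
def capital_cities():
    # One-sample dataset: the model should answer with the target string.
    return Task(
        dataset=[Sample(input="What is the capital of France? Answer in one word.",
                        target="Paris")],
        solver=generate(),  # single generation step, no extra scaffolding
        scorer=exact(),     # exact-match scoring against the target
    )

if __name__ == "__main__":
    # Model identifiers use the provider/model format and need that
    # provider's API key in the environment.
    eval(capital_cities(), model="openai/gpt-4o-mini")
```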
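And the nnsight sketch referenced above: the package wraps a Hugging Face model so you can trace a forward pass and save (or edit) intermediate activations. The prompt and layer choice are arbitrary examples, and nnsight's tracing API has changed across versions, so this shows the general pattern only.

```python
# Minimal sketch of capturing an intermediate activation with nnsight.
from nnsight import LanguageModel

model = LanguageModel("openai-community/gpt2", device_map="auto")

with model.trace("The Eiffel Tower is in the city of"):
    # Save the hidden states coming out of transformer block 5.
    hidden = model.transformer.h[5].output[0].save()

# After the trace exits, the saved proxy holds the actual tensor
# (older nnsight versions expose it via `.value` instead).
print(hidden.shape)
```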