hendrycks/ethics
Aligning AI With Shared Human Values (ICLR 2021)
☆281 · Updated last year
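For orientation, here is a minimal sketch of loading the ETHICS benchmark this repository introduces, assuming the data is mirrored on the Hugging Face Hub under `hendrycks/ethics`; the config and field names are assumptions, since the repository itself ships the splits as CSV files.

```python
# Minimal sketch: load one split of the ETHICS benchmark.
# Assumes a Hub mirror at "hendrycks/ethics"; the repo itself ships CSVs,
# so the config and field names here are assumptions.
from datasets import load_dataset

# Other assumed configs: "deontology", "justice", "utilitarianism", "virtue".
ds = load_dataset("hendrycks/ethics", "commonsense")
print(ds["train"][0])  # expected: a scenario text with a binary morality label
```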
Alternatives and similar repositories for ethics:
Users interested in ethics are comparing it to the repositories listed below.
- Repository for the Bias Benchmark for QA (BBQ) dataset. ☆106 · Updated last year
- Datasets from the paper "Towards Understanding Sycophancy in Language Models". ☆73 · Updated last year
- LLM experiments done during SERI MATS, focusing on activation steering and interpreting activation spaces. ☆91 · Updated last year
- ☆214 · Updated 6 months ago
- Repository for research in the field of Responsible NLP at Meta. ☆198 · Updated 4 months ago
- ☆104 · Updated 11 months ago
- ☆131 · Updated 5 months ago
- ☆128 · Updated last year
- ☆264 · Updated 8 months ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆197 · Updated last week
- Dataset associated with the paper "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation". ☆77 · Updated 4 years ago
- Steering Llama 2 with Contrastive Activation Addition (see the sketch after this list). ☆134 · Updated 10 months ago
- A library for finding knowledge neurons in pretrained transformer models. ☆155 · Updated 3 years ago
- PAIR.withgoogle.com and friends' work on interpretability methods. ☆173 · Updated this week
- Mechanistic Interpretability Visualizations using React. ☆235 · Updated 3 months ago
- Collection of evals for Inspect AI. ☆101 · Updated this week
- ☆263 · Updated last year
- Code for my NeurIPS 2024 ATTRIB paper "Attribution Patching Outperforms Automated Circuit Discovery". ☆30 · Updated 10 months ago
- A library for efficient patching and automatic circuit discovery. ☆62 · Updated last month
- Synthetic question-answering dataset to formally analyze the chain-of-thought output of large language models on a reasoning task. ☆141 · Updated 5 months ago
- Tools for understanding how transformer predictions are built layer by layer. ☆481 · Updated 9 months ago
- Inspecting and Editing Knowledge Representations in Language Models. ☆114 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models". ☆95 · Updated last month
- ☆23 · Updated last month
- Improving Alignment and Robustness with Circuit Breakers. ☆192 · Updated 6 months ago
- Utilities for the HuggingFace transformers library. ☆67 · Updated 2 years ago
- ☆114 · Updated 7 months ago
- ☆63 · Updated last month
- Interpretability for sequence generation models 🐛 🔍 ☆410 · Updated 4 months ago
- The PRISM Alignment Project. ☆70 · Updated 11 months ago
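As referenced in the "Steering Llama 2 with Contrastive Activation Addition" entry above, here is a minimal sketch of the general contrastive-activation-addition idea: compute a steering vector from the activation difference between a contrastive prompt pair, then add it to a layer's residual stream during generation. The model (`gpt2` as a small stand-in), layer index, prompts, and multiplier are all illustrative assumptions, not that repository's actual implementation.

```python
# Minimal sketch of contrastive activation addition (CAA). Model, layer
# index, prompts, and multiplier are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; the repo above steers Llama 2
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
layer = model.transformer.h[6]  # block whose output we treat as the residual stream

def mean_activation(prompt: str) -> torch.Tensor:
    """Mean hidden state of `layer` over the prompt's tokens, shape (1, hidden)."""
    acts = {}
    def hook(_module, _inputs, output):
        acts["h"] = output[0].mean(dim=1)  # output[0]: (batch, seq, hidden)
    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        model(**tok(prompt, return_tensors="pt"))
    handle.remove()
    return acts["h"]

# Steering vector = positive-prompt activations minus negative-prompt activations.
steer = mean_activation("I love helping people.") - mean_activation("I enjoy hurting people.")

def steering_hook(_module, _inputs, output):
    # Add the scaled steering vector at every token position; 4.0 is illustrative.
    return (output[0] + 4.0 * steer,) + output[1:]

handle = layer.register_forward_hook(steering_hook)
inputs = tok("My view of other people is that", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0]))
handle.remove()
```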