safety-research / false-facts ☆19 · Updated 2 months ago
Alternatives and similar repositories for false-facts
Users interested in false-facts are comparing it to the repositories listed below.
- Mechanistic Interpretability for Transformer Models ☆51 · Updated 3 years ago
- CausalGym: Benchmarking causal interpretability methods on linguistic tasks ☆46 · Updated 9 months ago
- Utilities for the HuggingFace transformers library ☆70 · Updated 2 years ago
- ☆36 · Updated 2 years ago
- Measuring the situational awareness of language models ☆38 · Updated last year
- Redwood Research's transformer interpretability tools ☆14 · Updated 3 years ago
- Understanding how features learned by neural networks evolve throughout training ☆37 · Updated 10 months ago
- ☆106 · Updated 6 months ago
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆121 · Updated 6 months ago
- Delphi was the home of a temple to Phoebus Apollo, which famously had the inscription, 'Know Thyself.' This library lets language models … ☆207 · Updated this week
- Code and files for the paper "Are Emergent Abilities in Large Language Models just In-Context Learning?" ☆33 · Updated 7 months ago
- PyTorch and NNsight implementation of AtP* (Kramár et al., 2024, DeepMind) ☆19 · Updated 7 months ago
- Implementation of Influence Function approximations for differently sized ML models, using PyTorch ☆15 · Updated last year
- Neural theorem proving tutorial, version II ☆39 · Updated last year
- ☆53 · Updated 2 years ago
- ☆85 · Updated last month
- Open-source replication of Anthropic's Crosscoders for Model Diffing ☆59 · Updated 10 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆129 · Updated 2 years ago
- An unofficial implementation of the Infini-gram model proposed by Liu et al. (2024) ☆33 · Updated last year
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- Attribution-based Parameter Decomposition ☆30 · Updated 2 months ago
- ☆29 · Updated last year
- ☆23 · Updated 2 months ago
- This repository contains the code used for the experiments in the paper "Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity… ☆28 · Updated last year
- Sparse Autoencoder Training Library ☆54 · Updated 4 months ago
- ☆36 · Updated 3 years ago
- ☆68 · Updated last week
- ☆122 · Updated last year
- Arrakis is a library to conduct, track, and visualize mechanistic interpretability experiments. ☆31 · Updated 4 months ago
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆75 · Updated last year