alignedai / HappyFacesLinks
The Happy Faces Benchmark
☆15 · Updated 2 years ago
Alternatives and similar repositories for HappyFaces
Users interested in HappyFaces are comparing it to the libraries listed below.
- Mechanistic Interpretability for Transformer Models ☆53 · Updated 3 years ago
- A library for bridging Python and HTML/JavaScript (via Svelte) for creating interactive visualizations ☆199 · Updated 3 years ago
- Tools for studying developmental interpretability in neural networks ☆105 · Updated 3 months ago
- Redwood Research's transformer interpretability tools ☆14 · Updated 3 years ago
- Neural Networks and the Chomsky Hierarchy ☆209 · Updated last year
- ☆22 · Updated 4 years ago
- Language-annotated Abstraction and Reasoning Corpus ☆93 · Updated 2 years ago
- ☆65 · Updated 2 years ago
- Mechanistic Interpretability Visualizations using React ☆291 · Updated 10 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆129 · Updated 3 years ago
- Utilities for the HuggingFace transformers library ☆72 · Updated 2 years ago
- ☆69 · Updated 3 years ago
- ☆128 · Updated last year
- Erasing concepts from neural representations with provable guarantees ☆236 · Updated 8 months ago
- we got you bro ☆36 · Updated last year
- ☆244 · Updated last year
- (Model-written) LLM evals library ☆18 · Updated 10 months ago
- See the issue board for the current status of active and prospective projects! ☆65 · Updated 3 years ago
- ControlArena is a collection of settings, model organisms, and protocols for running control experiments ☆100 · Updated last week
- A domain-specific probabilistic programming language for modeling and inference with language models ☆136 · Updated 5 months ago
- Unofficial re-implementation of "Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets" ☆79 · Updated 3 years ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆228 · Updated 2 months ago
- 🧠 Starter templates for doing interpretability research ☆75 · Updated 2 years ago
- Keeping language models honest by directly eliciting knowledge encoded in their activations ☆208 · Updated last week
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆191 · Updated 2 years ago
- ☆278 · Updated last year
- PyTorch and NNsight implementation of AtP* (Kramár et al., 2024, DeepMind) ☆20 · Updated 8 months ago
- ☆29 · Updated last year
- Machine Learning for Alignment Bootcamp ☆79 · Updated 3 years ago
- Probabilistic programming with large language models ☆139 · Updated 2 months ago