apple / ml-np-rasp
☆20, updated last year
Alternatives and similar repositories for ml-np-rasp
Users interested in ml-np-rasp are comparing it to the libraries listed below.
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper (☆130, updated 3 years ago; see the superposition sketch after this list)
- Mechanistic Interpretability for Transformer Models (☆53, updated 3 years ago)
- Code Release for "Broken Neural Scaling Laws" (BNSL) paper (☆59, updated 2 years ago)
- Language-annotated Abstraction and Reasoning Corpus (☆98, updated 2 years ago)
- A domain-specific probabilistic programming language for modeling and inference with language models (☆137, updated 7 months ago)
- ☆29 (updated last year)
- Tools for studying developmental interpretability in neural networks (☆117, updated 5 months ago)
- ☆132 (updated 2 years ago)
- Python library which enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference… (☆215, updated 6 months ago)
- ☆27 (updated 2 years ago)
- Emergent world representations: Exploring a sequence model trained on a synthetic task (☆196, updated 2 years ago)
- Implementation of the RASP transformer programming language, https://arxiv.org/pdf/2106.06981.pdf (☆59, updated last month)
- Extract full next-token probabilities via language model APIs (☆248, updated last year; see the logit-bias sketch after this list)
- ☆70 (updated 3 years ago)
- An interactive exploration of Transformer programming (☆270, updated 2 years ago)
- Neural Networks and the Chomsky Hierarchy (☆211, updated last year)
- Materials for ConceptARC paper (☆109, updated last year)
- Erasing concepts from neural representations with provable guarantees (☆239, updated 10 months ago)
- Keeping language models honest by directly eliciting knowledge encoded in their activations (☆215, updated this week)
- Sparse and discrete interpretability tool for neural networks (☆64, updated last year)
- git extension for {collaborative, communal, continual} model development (☆217, updated last year)
- An interpreter for RASP as described in the ICML 2021 paper "Thinking Like Transformers" (☆323, updated last year; see the RASP sketch after this list)
- Attribution-based Parameter Decomposition (☆33, updated 6 months ago)
- Official code for "Algorithmic Capabilities of Random Transformers" (NeurIPS 2024) (☆16, updated last year)
- 🧠 Starter templates for doing interpretability research (☆74, updated 2 years ago)
- Redwood Research's transformer interpretability tools (☆14, updated 3 years ago)
- Train very large language models in Jax (☆210, updated 2 years ago)
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) (☆63, updated 4 years ago)
- See the issue board for the current status of active and prospective projects! (☆65, updated 3 years ago)
- A library to create and manage configuration files, especially for machine learning projects (☆79, updated 3 years ago)
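For a flavor of the techniques behind a few of the entries above, some hedged sketches follow. The "Toy Models of Superposition" notebooks study a small ReLU model that is forced to pack more sparse features than it has hidden dimensions; the sketch below reproduces that setup in outline, with the feature count, hidden width, sparsity, and importance schedule chosen as illustrative assumptions rather than the paper's exact configuration.

```python
import torch

# Sketch of the "Toy Models of Superposition" setup. Sizes, sparsity, and
# the importance schedule are illustrative assumptions, not the paper's.
n_features, d_hidden, batch = 20, 5, 1024
sparsity = 0.95                               # probability a feature is zero
importance = 0.9 ** torch.arange(n_features)  # geometrically decaying weights

W = torch.nn.Parameter(0.1 * torch.randn(n_features, d_hidden))
b = torch.nn.Parameter(torch.zeros(n_features))
opt = torch.optim.Adam([W, b], lr=1e-3)

for step in range(5000):
    # Sparse inputs: each feature is uniform in [0, 1] with probability
    # 1 - sparsity, else exactly zero.
    mask = (torch.rand(batch, n_features) > sparsity).float()
    x = torch.rand(batch, n_features) * mask
    x_hat = torch.relu(x @ W @ W.T + b)       # reconstruct through the bottleneck
    loss = (importance * (x - x_hat) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# With more features than hidden dimensions, W @ W.T ends up far from the
# identity: features share directions, and the off-diagonal entries show
# the interference between them.
print((W @ W.T).detach())
```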
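The "extract full next-token probabilities" entry concerns a known trick for APIs that expose only top-k log-probabilities but accept a per-token logit bias: bias one candidate token to the top at a time, and recover its logit relative to a fixed reference token, since the partition function cancels in the difference. The sketch below implements that arithmetic around a hypothetical `query_top_logprobs` wrapper; the helper name, the bias value, and the assumption that the API returns at least two top tokens are all mine, not the repository's interface.

```python
import math

BIAS = 50.0  # large enough to force any biased token into the top slot

def full_next_token_logprobs(prompt, vocab, query_top_logprobs):
    """Recover the full next-token log-distribution over `vocab`.

    `query_top_logprobs(prompt, logit_bias)` is a hypothetical wrapper
    around your API of choice: it returns {token: logprob} for the top few
    tokens after adding `logit_bias` ({token: float}) to the model's logits.
    """
    # Reference token: the unbiased argmax. Assuming the API returns at
    # least the top two tokens, it stays visible in every biased call.
    baseline = query_top_logprobs(prompt, logit_bias={})
    ref = max(baseline, key=baseline.get)

    rel_logits = {ref: 0.0}  # logits relative to the reference token
    for tok in vocab:
        if tok == ref:
            continue
        top = query_top_logprobs(prompt, logit_bias={tok: BIAS})
        # With bias B on `tok`, the shared partition term cancels:
        # logprob'(tok) - logprob'(ref) = (logit_tok + B) - logit_ref.
        rel_logits[tok] = top[tok] - top[ref] - BIAS

    # Normalize the relative logits into a proper log-distribution.
    log_z = math.log(sum(math.exp(v) for v in rel_logits.values()))
    return {tok: v - log_z for tok, v in rel_logits.items()}
```

Looping over the whole vocabulary costs one API call per token, so in practice one would batch several biased tokens per request to cut the call count.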
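Finally, both RASP entries above revolve around the same two primitives: `select`, which builds an attention-like boolean matrix from a predicate over key/query pairs, and `aggregate`, which pools values through that matrix. The toy re-implementation below is meant only to convey those semantics in plain Python; it is not either repository's code, and reading the sequence length off the list is a simplification of how RASP itself derives it.

```python
# Toy re-implementation of RASP's two core primitives over plain Python lists.

def select(keys, queries, predicate):
    """Attention-like selector matrix: S[q][k] = predicate(keys[k], queries[q])."""
    return [[predicate(k, q) for k in keys] for q in queries]

def aggregate(selector, values, default=None):
    """Pool `values` through `selector`, one output per query position.

    Numeric values are averaged over the selected positions; categorical
    values assume a one-hot selection and pass through unchanged.
    """
    out = []
    for row in selector:
        picked = [v for v, keep in zip(values, row) if keep]
        if not picked:
            out.append(default)
        elif all(isinstance(v, (int, float)) for v in picked):
            out.append(sum(picked) / len(picked))
        else:
            out.append(picked[0])
    return out

# Example: reversing a sequence, RASP-style.
tokens = list("hello")
indices = list(range(len(tokens)))
n = len(tokens)

flip = select(indices, indices, lambda k, q: k == n - 1 - q)
print(aggregate(flip, tokens))  # ['o', 'l', 'l', 'e', 'h']
```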