google-deepmind / tracr
☆ 551 · Updated 2 years ago
Alternatives and similar repositories for tracr
Users interested in tracr are comparing it to the libraries listed below.
- An interpreter for RASP as described in the ICML 2021 paper "Thinking Like Transformers" ☆ 323 · Updated last year
- An interactive exploration of Transformer programming. ☆ 271 · Updated 2 years ago
- Tools for understanding how transformer predictions are built layer-by-layer ☆ 567 · Updated 6 months ago
- Neural Networks and the Chomsky Hierarchy ☆ 212 · Updated last year
- ☆ 284 · Updated last year
- Language Modeling with the H3 State Space Model ☆ 522 · Updated 2 years ago
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆ 201 · Updated 2 years ago
- git extension for {collaborative, communal, continual} model development ☆ 217 · Updated last year
- Keeping language models honest by directly eliciting knowledge encoded in their activations. ☆ 217 · Updated 2 weeks ago
- Erasing concepts from neural representations with provable guarantees ☆ 243 · Updated last year
- Draw more samples ☆ 198 · Updated last year
- ☆ 259 · Updated 8 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆ 693 · Updated 2 weeks ago
- Extract full next-token probabilities via language model APIs ☆ 248 · Updated last year
- Mechanistic Interpretability Visualizations using React ☆ 320 · Updated last year
- Python library which enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference… ☆ 216 · Updated 3 weeks ago
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆ 355 · Updated last year
- Implementing the RASP transformer programming language (https://arxiv.org/pdf/2106.06981.pdf). ☆ 59 · Updated 3 months ago
- Reverse Engineering the Abstraction and Reasoning Corpus ☆ 332 · Updated 11 months ago
- Automatic gradient descent ☆ 217 · Updated 2 years ago
- Tools for studying developmental interpretability in neural networks. ☆ 126 · Updated last month
- Train very large language models in Jax. ☆ 210 · Updated 2 years ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆ 198 · Updated last year
- Code release for "Git Re-Basin: Merging Models modulo Permutation Symmetries" ☆ 502 · Updated 2 years ago
- Implementation of Memorizing Transformers (ICLR 2022), attention net augmented with indexing and retrieval of memories using approximate … ☆ 639 · Updated 2 years ago
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆ 238 · Updated 5 months ago
- Code for Parsel 🐍 - generate complex programs with language models ☆ 439 · Updated 2 years ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆ 135 · Updated 3 years ago
- Cramming the training of a (BERT-type) language model into limited compute. ☆ 1,361 · Updated last year
- Convolutions for Sequence Modeling ☆ 910 · Updated last year