catherinesyeh / attention-viz
Visualizing query-key interactions in language + vision transformers
☆139 · Updated 10 months ago
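The "query-key interactions" mentioned above are the per-head attention matrices softmax(QKᵀ/√d) that a transformer computes at every layer. For orientation only (this is not attention-viz's own code; the model name and input sentence are illustrative assumptions), a minimal Python sketch of extracting those matrices with the Hugging Face transformers library:

```python
# Hypothetical sketch: pull per-layer, per-head attention matrices from a
# Hugging Face model. attention-viz builds its visualizations from this kind
# of query-key data; the model and sentence below are placeholder choices.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("Attention maps show query-key interactions.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer, each shaped
# (batch, num_heads, seq_len, seq_len) and holding softmax(QK^T / sqrt(d)).
layer0_head0 = outputs.attentions[0][0, 0]
print(layer0_head0.shape)  # (seq_len, seq_len) attention weights for layer 0, head 0
```

These per-head weight matrices are the raw material that attention visualization tools, including the repositories listed below, typically operate on.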
Alternatives and similar repositories for attention-viz:
Users interested in attention-viz are comparing it to the libraries listed below.
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆195 · Updated last year
- ☆88 · Updated last month
- ☆123 · Updated last month
- Tools for understanding how transformer predictions are built layer-by-layer ☆478 · Updated 9 months ago
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆70 · Updated 3 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆186 · Updated 9 months ago
- Recurrent Memory Transformer ☆150 · Updated last year
- ☆159 · Updated this week
- Implementation of 🌻 Mirasol, SOTA multimodal autoregressive model out of Google DeepMind, in PyTorch ☆88 · Updated last year
- ☆40 · Updated 10 months ago
- ☆67 · Updated 6 months ago
- Inspecting and Editing Knowledge Representations in Language Models ☆112 · Updated last year
- ☆119 · Updated 5 months ago
- Repo for the paper "Shepherd: A Critic for Language Model Generation" ☆218 · Updated last year
- ☆130 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers" (NeurIPS 2023) ☆130 · Updated 10 months ago
- Bootstrapping ARC ☆103 · Updated 3 months ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆174 · Updated 6 months ago
- PASTA: Post-hoc Attention Steering for LLMs ☆113 · Updated 3 months ago
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research) ☆184 · Updated 2 months ago
- How do transformer LMs encode relations? ☆46 · Updated last year
- Scaling Data-Constrained Language Models ☆334 · Updated 5 months ago
- ☆150 · Updated last year
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆79 · Updated 11 months ago
- Functional Benchmarks and the Reasoning Gap ☆84 · Updated 5 months ago
- Function Vectors in Large Language Models (ICLR 2024) ☆142 · Updated 5 months ago
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆90 · Updated 2 weeks ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆159 · Updated 2 months ago
- Code for the NeurIPS 2024 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆185 · Updated 3 months ago
- ☆167 · Updated last year