catherinesyeh / attention-viz
Visualizing query-key interactions in language + vision transformers (VIS 2023)
☆156 · Updated last year
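For orientation: attention-viz plots the query and key vectors behind a transformer's attention patterns. Below is a minimal sketch, assuming standard scaled dot-product attention in plain NumPy (not attention-viz's actual code), of the query-key interaction matrix such tools visualize; all names and sizes are illustrative.

```python
# Minimal sketch of the query-key interaction matrix that attention
# visualizers plot. Uses standard scaled dot-product attention in NumPy;
# this is an illustrative assumption, not attention-viz's actual code.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_head = 8, 64  # hypothetical sequence length and head dimension

# Stand-ins for one attention head's projected queries and keys.
Q = rng.standard_normal((seq_len, d_head))
K = rng.standard_normal((seq_len, d_head))

# Query-key interaction: scaled dot products, then a row-wise softmax.
scores = Q @ K.T / np.sqrt(d_head)  # shape: (seq_len, seq_len)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

# Each row sums to 1: how strongly each query position attends to each key.
print(weights.round(2))
```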
Alternatives and similar repositories for attention-viz
Users interested in attention-viz are comparing it to the repositories listed below.
- Extracting spatial and temporal world models from LLMs ☆257 · Updated 2 years ago
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆191 · Updated 2 years ago
- Tools for understanding how transformer predictions are built layer-by-layer ☆549 · Updated 3 months ago
- Scaling Data-Constrained Language Models ☆342 · Updated 5 months ago
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆130 · Updated 3 years ago
- Website for hosting the Open Foundation Models Cheat Sheet. ☆269 · Updated 6 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆194 · Updated last year
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆178 · Updated last year
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆224 · Updated last year
- LLM-Merging: Building LLMs Efficiently through Merging ☆205 · Updated last year
- Editing Models with Task Arithmetic ☆515 · Updated last year (see the task-vector sketch after this list)
- Implementation of the Llama architecture with RLHF + Q-learning ☆168 · Updated 9 months ago
- Code repository for Black Mamba ☆260 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). ☆228 · Updated 11 months ago
- Implementation of 🌻 Mirasol, a SOTA multimodal autoregressive model out of Google DeepMind, in PyTorch ☆90 · Updated last year
- Recurrent Memory Transformer ☆154 · Updated 2 years ago
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆201 · Updated 2 years ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆257 · Updated 2 years ago
- Erasing concepts from neural representations with provable guarantees ☆239 · Updated 10 months ago
- RuLES: a benchmark for evaluating rule-following in language models ☆240 · Updated 9 months ago
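For the task-arithmetic entry above: a minimal sketch, assuming the formulation from "Editing Models with Task Arithmetic" (a task vector is the fine-tuned weights minus the pretrained weights; scaled sums of task vectors are added back to the pretrained model to combine or negate tasks). The toy NumPy dicts and helper names below are hypothetical, not the repository's API.

```python
# Hypothetical sketch of task arithmetic: task vectors are parameter-wise
# differences between fine-tuned and pretrained weights, and editing means
# adding a scaled sum of task vectors back to the pretrained model.
import numpy as np

# Toy "state dicts" standing in for real model weights.
pretrained = {"w": np.array([1.0, 2.0]), "b": np.array([0.5])}
finetuned_a = {"w": np.array([1.5, 1.0]), "b": np.array([0.7])}
finetuned_b = {"w": np.array([0.5, 2.5]), "b": np.array([0.4])}

def task_vector(ft, pre):
    """Task vector: fine-tuned minus pretrained, parameter by parameter."""
    return {k: ft[k] - pre[k] for k in pre}

def apply_vectors(pre, vectors, scale=1.0):
    """Edit the pretrained model by adding a scaled sum of task vectors."""
    return {k: pre[k] + scale * sum(v[k] for v in vectors) for k in pre}

tau_a = task_vector(finetuned_a, pretrained)
tau_b = task_vector(finetuned_b, pretrained)

# Adding both task vectors aims at multi-task behavior; negating one
# aims at forgetting that task while leaving the rest intact.
multi_task = apply_vectors(pretrained, [tau_a, tau_b], scale=0.5)
forget_a = apply_vectors(pretrained, [{k: -v for k, v in tau_a.items()}])
print(multi_task)
print(forget_a)
```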