catherinesyeh / attention-viz
Visualizing query-key interactions in language + vision transformers (VIS 2023)
☆151 · Updated last year
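For context on what the tool shows: attention-viz visualizes the query and key vectors whose dot products produce each attention weight. Below is a minimal sketch (not the repo's actual code) of extracting per-head attention matrices from a Hugging Face transformer; the model name and the layer/head indices are arbitrary examples.

```python
# Minimal sketch, assuming a Hugging Face model; not attention-viz's code.
# "bert-base-uncased" and the layer/head indices below are example choices.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tokenizer("Attention is all you need", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions holds one tensor per layer, shaped
# (batch, num_heads, seq_len, seq_len): softmax(QK^T / sqrt(d)).
layer, head = 0, 0
attn = outputs.attentions[layer][0, head]  # (seq_len, seq_len)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for i, tok in enumerate(tokens):
    top = attn[i].argmax().item()
    print(f"{tok:>12} attends most to {tokens[top]}")
```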
Alternatives and similar repositories for attention-viz
Users interested in attention-viz are comparing it to the repositories listed below
- ☆105 · Updated 6 months ago
- Code repository for Black Mamba ☆254 · Updated last year
- Extracting spatial and temporal world models from LLMs ☆255 · Updated last year
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆177 · Updated 11 months ago
- Emergent world representations: Exploring a sequence model trained on a synthetic task ☆188 · Updated 2 years ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆192 · Updated last year
- Website hosting the Open Foundation Models Cheat Sheet ☆267 · Updated 3 months ago
- ☆69 · Updated last year
- ☆141 · Updated 2 weeks ago
- Scaling Data-Constrained Language Models ☆340 · Updated 2 months ago
- ☆293 · Updated last year
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆200 · Updated 2 years ago
- Official repository of Pretraining Without Attention (BiGS), the first model to achieve BERT-level transfer learning on the GLUE … ☆114 · Updated last year
- LLM-Merging: Building LLMs Efficiently through Merging ☆203 · Updated 11 months ago
- ☆164 · Updated last year
- ☆166 · Updated 2 years ago
- Functional Benchmarks and the Reasoning Gap ☆88 · Updated 10 months ago
- ☆127 · Updated 11 months ago
- ☆53 · Updated 2 years ago
- ☆154 · Updated last year
- Code for the NeurIPS 2024 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆229 · Updated last month
- Dataset collection and preprocessing framework for NLP extreme multitask learning ☆186 · Updated last month
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Notebooks accompanying Anthropic's "Toy Models of Superposition" paper ☆129 · Updated 2 years ago
- ☆301 · Updated last year
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch ☆229 · Updated 11 months ago
- A curated reading list of research in Adaptive Computation, Inference-Time Computation & Mixture of Experts (MoE) ☆152 · Updated 8 months ago
- Repository for code used in the xVal paper ☆142 · Updated last year
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- ☆149 · Updated last year