RobertCsordas / linear_layer_as_attention
The official repository for our paper "The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns via Spotlights of Attention".
☆16 · Updated last month
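For context: the paper's core identity is that a linear layer trained with gradient descent is exactly equivalent to unnormalized linear attention over its training patterns, where the keys are the training inputs and the values are the scaled error signals. A minimal NumPy sketch of that identity (variable names are illustrative, not taken from the repo):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_steps, lr = 4, 3, 50, 0.01

# A stream of training inputs x_t and the error signals g_t = dL/dy that
# backprop delivered at each step (random stand-ins here).
xs = rng.normal(size=(n_steps, d_in))
gs = rng.normal(size=(n_steps, d_out))

# Primal form: ordinary SGD on the weight matrix of y = W x.
W0 = rng.normal(size=(d_out, d_in))
W = W0.copy()
for x, g in zip(xs, gs):
    W -= lr * np.outer(g, x)          # dL/dW = g x^T

x_test = rng.normal(size=d_in)
y_primal = W @ x_test

# Dual form: W0 stays frozen; the test input instead attends over the
# stored training patterns: y = W0 x + sum_t (x_t . x) * (-lr * g_t).
scores = xs @ x_test                  # unnormalized attention scores (keys = x_t)
values = -lr * gs                     # values = scaled error signals
y_dual = W0 @ x_test + values.T @ scores

assert np.allclose(y_primal, y_dual)  # the two forms agree exactly
```

In real SGD each g_t depends on the weights at step t, but the identity holds for whatever gradient sequence actually occurred during training.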
Alternatives and similar repositories for linear_layer_as_attention
Users interested in linear_layer_as_attention are comparing it to the repositories listed below.
- ☆51 · Updated last year
- Official code for the paper "Attention as a Hypernetwork" ☆40 · Updated last year
- Code for T-MARS data filtering ☆35 · Updated last year
- Code for the paper "Query-Key Normalization for Transformers" ☆43 · Updated 4 years ago
- ☆26 · Updated 3 years ago
- Official code for "Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving" (ICML 2021) ☆27 · Updated 3 years ago
- ☆21 · Updated 2 years ago
- ☆18 · Updated 2 years ago
- HGRN2: Gated Linear RNNs with State Expansion ☆55 · Updated 10 months ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Latest Weight Averaging (NeurIPS HITY 2022) ☆30 · Updated 2 years ago
- Codebase for the SIMAT dataset and evaluation ☆38 · Updated 3 years ago
- Unofficial implementation of the paper "Exploring the Space of Key-Value-Query Models with Intention" ☆12 · Updated 2 years ago
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing ☆50 · Updated 3 years ago
- ☆29 · Updated 2 years ago
- Curse-of-memory phenomenon of RNNs in sequence modelling ☆19 · Updated 2 months ago
- Code for the paper "LASeR: Learning to Adaptively Select Reward Models with Multi-Arm Bandits" ☆13 · Updated 9 months ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated last year
- ☆20 · Updated last year
- My explorations into editing the knowledge and memories of an attention network ☆35 · Updated 2 years ago
- Blog post ☆17 · Updated last year
- ☆16 · Updated 11 months ago
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆17 · Updated last year
- Official code for the paper "Metadata Archaeology" ☆19 · Updated 2 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆51 · Updated 3 years ago
- Why Do We Need Weight Decay in Modern Deep Learning? (NeurIPS 2024) ☆66 · Updated 9 months ago
- Your Transformer May Not be as Powerful as You Expect (NeurIPS 2022, official implementation) ☆34 · Updated last year
- Companion repository to "Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models" ☆13 · Updated 2 years ago
- Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️ ☆54 · Updated 2 years ago
- ☆26 · Updated last year