RobertCsordas / linear_layer_as_attention
The official repository for our paper "The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns via Spotlights of Attention".
☆16 · Updated 5 months ago
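The paper's central identity can be checked numerically in a few lines. The sketch below is not the authors' code, just an illustration of the dual form: a linear layer trained by SGD on squared error makes test-time predictions that equal unnormalized dot-product attention over its training inputs, with keys `x_t` and values `-lr * err_t` accumulated during training (all names here are illustrative).

```python
import numpy as np

# Hedged sketch (not the official implementation): each SGD step on squared
# loss updates  W += -lr * err @ x.T,  so the final weights expand as
#   W_final @ q = W0 @ q + sum_t v_t * (x_t . q),
# i.e. linear attention with keys x_t and values v_t = -lr * err_t.

rng = np.random.default_rng(0)
d_in, d_out, steps, lr = 4, 3, 50, 0.1

W0 = rng.normal(size=(d_out, d_in))   # initial weights
W = W0.copy()
keys, values = [], []

for _ in range(steps):
    x = rng.normal(size=(d_in, 1))    # training input (acts as a key)
    y = rng.normal(size=(d_out, 1))   # regression target
    err = W @ x - y                    # error signal at this step
    keys.append(x)
    values.append(-lr * err)           # value this step writes into W
    W += -lr * err @ x.T               # standard SGD update

# Test-time query: primal (weights) vs. dual (attention) computation.
q = rng.normal(size=(d_in, 1))
primal = W @ q
dual = W0 @ q + sum(v * float(k.T @ q) for k, v in zip(keys, values))
print(np.allclose(primal, dual))  # True: attention over training patterns
```

The two computations agree to floating-point precision, which is the "spotlight of attention" view: the query's dot product with each stored training input decides how strongly that pattern's error signal contributes to the prediction.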
Alternatives and similar repositories for linear_layer_as_attention
Users interested in linear_layer_as_attention are comparing it to the repositories listed below.
- Official code for "Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving", ICML 2021 ☆28 · Updated 4 years ago
- ☆26 · Updated 3 years ago
- Code for the paper "Query-Key Normalization for Transformers" ☆49 · Updated 4 years ago
- Official code for the paper "Attention as a Hypernetwork" ☆46 · Updated last year
- ☆14 · Updated 4 years ago
- ☆52 · Updated last year
- Code for T-MARS data filtering ☆35 · Updated 2 years ago
- Un-*** 50 billion multimodal dataset ☆22 · Updated 3 years ago
- Codebase for the SIMAT dataset and evaluation ☆38 · Updated 3 years ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing ☆50 · Updated 3 years ago
- HGRN2: Gated Linear RNNs with State Expansion ☆55 · Updated last year
- ImageNet-12k subset of ImageNet-21k (fall11) ☆21 · Updated 2 years ago
- An adaptive training algorithm for residual networks ☆17 · Updated 5 years ago
- Latest Weight Averaging (NeurIPS HITY 2022) ☆31 · Updated 2 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- [ICML 2023] "Instant Soup: Cheap Pruning Ensembles in a Single Pass Can Draw Lottery Tickets from Large Models." Ajay Jaiswal, Shiwei Liu, Ti… ☆11 · Updated last year
- Implementation of Discrete Key / Value Bottleneck, in PyTorch ☆88 · Updated 2 years ago
- ☆18 · Updated 3 years ago
- Directed masked autoencoders ☆14 · Updated 2 years ago
- Repository for the paper "Do SSL Models Have Déjà Vu? A Case of Unintended Memorization in Self-supervised Learning" ☆36 · Updated 2 years ago
- ☆27 · Updated last year
- PyTorch code for the paper "An Empirical Study of Multimodal Model Merging" ☆37 · Updated 2 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆63 · Updated 3 years ago
- ☆42 · Updated 2 years ago
- Experiments for "A Closer Look at In-Context Learning under Distribution Shifts" ☆19 · Updated 2 years ago
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆18 · Updated last year
- Official code for the paper "Metadata Archaeology" ☆19 · Updated 2 years ago
- ☆25 · Updated 2 years ago
- PyTorch implementation of the paper "ViP: A Differentially Private Foundation Model for Computer Vision" ☆36 · Updated 2 years ago