RobertCsordas / linear_layer_as_attention
The official repository for our paper "The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns via Spotlights of Attention".
☆16 · Updated 5 months ago
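The paper's core identity is that a linear layer trained by gradient descent can be rewritten at test time as unnormalized dot-product attention over the training inputs it saw, with the per-step error signals acting as the values. The sketch below is an illustrative NumPy reconstruction of that identity for plain SGD on a squared-error objective; it is not code from this repository, and the names (`W0`, `keys`, `values`, the dimensions and learning rate) are made up for the example.

```python
# Illustrative sketch (not repository code) of the "dual form" of a linear layer.
# After SGD from W0, the trained layer's output equals the initial output plus
# unnormalized dot-product attention over the stored training inputs, weighted
# by the error signals applied at each update step.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, steps, lr = 8, 4, 100, 0.05

W = rng.normal(size=(d_out, d_in)) * 0.1     # initial weights W0
W0 = W.copy()

keys, values = [], []                        # stored training inputs / error signals
for _ in range(steps):
    x = rng.normal(size=d_in)                # one training input (batch size 1)
    y_target = rng.normal(size=d_out)        # arbitrary regression target
    grad_y = W @ x - y_target                # dL/dy for squared error
    W -= lr * np.outer(grad_y, x)            # SGD step: W <- W - lr * grad_y x^T
    keys.append(x)
    values.append(-lr * grad_y)              # this step's contribution ("value")

# Test time: primal form (trained weights) vs. dual form (attention over patterns).
x_test = rng.normal(size=d_in)
primal = W @ x_test
dual = W0 @ x_test + sum(v * (k @ x_test) for k, v in zip(keys, values))

print(np.allclose(primal, dual))             # True: the two forms coincide
```

The dot products `k @ x_test` play the role of unnormalized attention scores, which is what lets the test-time prediction be traced back to individual training patterns.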
Alternatives and similar repositories for linear_layer_as_attention
Users interested in linear_layer_as_attention are comparing it to the repositories listed below.
- Code for T-MARS data filtering ☆35 · Updated 2 years ago
- ☆26 · Updated 3 years ago
- Code for the paper "Query-Key Normalization for Transformers" ☆49 · Updated 4 years ago
- Official code for the paper "Attention as a Hypernetwork" ☆46 · Updated last year
- ☆14 · Updated 4 years ago
- HGRN2: Gated Linear RNNs with State Expansion ☆55 · Updated last year
- Latest Weight Averaging (NeurIPS HITY 2022) ☆32 · Updated 2 years ago
- Repository for the paper "Do SSL Models Have Déjà Vu? A Case of Unintended Memorization in Self-supervised Learning" ☆36 · Updated 2 years ago
- Official code for "Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving", ICML 2021 ☆28 · Updated 4 years ago
- Codebase for the SIMAT dataset and evaluation ☆38 · Updated 3 years ago
- ☆52 · Updated last year
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing ☆50 · Updated 3 years ago
- Un-*** 50-billion multimodality dataset ☆23 · Updated 3 years ago
- Stochastic Optimization for Global Contrastive Learning without Large Mini-batches ☆20 · Updated 2 years ago
- Official code for the paper "Metadata Archaeology" ☆19 · Updated 2 years ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- This is a PyTorch implementation of the paper "ViP: A Differentially Private Foundation Model for Computer Vision" ☆36 · Updated 2 years ago
- ☆18 · Updated 3 years ago
- [ICML 2023] Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models. Ajay Jaiswal, Shiwei Liu, Ti… ☆11 · Updated 2 years ago
- ☆27 · Updated last year
- Patching open-vocabulary models by interpolating weights ☆91 · Updated 2 years ago
- Implementation of "compositional attention" from MILA, a multi-head attention variant that is reframed as a two-step attention process wi… ☆51 · Updated 3 years ago
- ☆23 · Updated 10 months ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- Implementation of Discrete Key / Value Bottleneck, in Pytorch ☆88 · Updated 2 years ago
- Code for the paper "Self-Supervised Learning of Split Invariant Equivariant Representations" ☆30 · Updated 2 years ago
- ☆38 · Updated last year
- Implementation of some personal helper functions for Einops, my most favorite tensor manipulation library ❤️ ☆57 · Updated 2 years ago
- ☆42 · Updated 2 years ago