RobertCsordas / linear_layer_as_attention
The official repository for our paper "The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns via Spotlights of Attention".
☆16 · Updated last month
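The paper's core identity is that a linear layer trained by gradient descent can be written exactly in its dual form: the test-time prediction equals the initial layer's output plus unnormalized linear attention over the stored training inputs (keys) and their learning-rate-scaled error signals (values). Below is a minimal NumPy sketch of that identity; it is not code from this repository, and the toy squared-error objective, dimensions, and variable names are illustrative assumptions.

```python
# Sketch (not the authors' code): a linear layer trained by plain SGD makes the
# same test-time prediction as unnormalized linear attention over its training
# patterns. Toy squared-error loss, shapes, and names are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_steps, lr = 8, 4, 50, 0.1

W0 = rng.normal(size=(d_out, d_in))   # initial weights
W = W0.copy()

keys, values = [], []                 # "dual" storage of training patterns
for _ in range(n_steps):
    x = rng.normal(size=d_in)         # training input
    target = rng.normal(size=d_out)   # toy regression target
    err = target - W @ x              # negative output-gradient of 0.5 * ||target - W x||^2
    W += lr * np.outer(err, x)        # primal SGD update (rank-1 outer product)
    keys.append(x)                    # key   = training input
    values.append(lr * err)           # value = learning-rate-scaled error signal

x_test = rng.normal(size=d_in)
primal = W @ x_test                   # ordinary forward pass of the trained layer
attn = sum(v * (k @ x_test) for k, v in zip(keys, values))  # dot-product attention over training inputs
dual = W0 @ x_test + attn
print(np.allclose(primal, dual))      # True: the two forms coincide exactly
```

In this view each test prediction attends most strongly to the training patterns whose inputs are most similar to the test input, which is the "spotlights of attention" reading referenced in the title.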
Alternatives and similar repositories for linear_layer_as_attention
Users interested in linear_layer_as_attention are comparing it to the libraries listed below
- Official code for the paper "Attention as a Hypernetwork" ☆40 · Updated last year
- ☆51 · Updated last year
- Code for the paper "Query-Key Normalization for Transformers" ☆45 · Updated 4 years ago
- Code for T-MARS data filtering ☆35 · Updated last year
- Un-*** 50 billion multimodality dataset ☆23 · Updated 2 years ago
- Codebase for the SIMAT dataset and evaluation ☆38 · Updated 3 years ago
- HGRN2: Gated Linear RNNs with State Expansion ☆55 · Updated 11 months ago
- ☆26 · Updated 3 years ago
- Why Do We Need Weight Decay in Modern Deep Learning? [NeurIPS 2024] ☆66 · Updated 10 months ago
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing ☆50 · Updated 3 years ago
- Official code for "Accelerating Feedforward Computation via Parallel Nonlinear Equation Solving", ICML 2021☆27Updated 3 years ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Experimental scripts for researching data-adaptive learning rate scheduling. ☆23 · Updated last year
- Implementation of some personal helper functions for Einops, my favorite tensor manipulation library ❤️ ☆55 · Updated 2 years ago
- ☆20 · Updated last year
- ☆13 · Updated 3 years ago
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) ☆34 · Updated 2 years ago
- [ICML 2023] Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models. Ajay Jaiswal, Shiwei Liu, Ti… ☆11 · Updated last year
- Latest Weight Averaging (NeurIPS HITY 2022) ☆31 · Updated 2 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- Code for the paper "Stack Attention: Improving the Ability of Transformers to Model Hierarchical Patterns" ☆17 · Updated last year
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆80 · Updated last year
- Implementation of Discrete Key / Value Bottleneck, in PyTorch ☆88 · Updated 2 years ago
- Official code for the paper "Metadata Archaeology" ☆19 · Updated 2 years ago
- ☆26 · Updated last year
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆62 · Updated 3 years ago
- ☆32 · Updated last year
- This is a PyTorch implementation of the paper "ViP: A Differentially Private Foundation Model for Computer Vision" ☆36 · Updated 2 years ago
- Here we will test various linear attention designs. ☆62 · Updated last year
- ImageNet-12k subset of ImageNet-21k (fall11) ☆21 · Updated 2 years ago