microsoft / ResiDual
ResiDual: Transformer with Dual Residual Connections, https://arxiv.org/abs/2304.14802
☆87 · Updated last year
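Per the paper, ResiDual runs two residual streams side by side: a Post-LN stream that feeds each sublayer, and a Pre-LN-style stream that accumulates the raw sublayer outputs, aiming to get the benefits of both normalization placements. Below is a minimal PyTorch sketch of that dual-stream wiring; the class names, the toy feedforward sublayer, and the exact initialization and fusion of the dual stream are illustrative assumptions, not this repo's API.

```python
# A minimal sketch of dual residual connections (Pre-Post-LN), hedged from the
# paper's description. Names and the toy sublayer are illustrative only.
import torch
import torch.nn as nn

class ResiDualBlock(nn.Module):
    def __init__(self, dim, sublayer):
        super().__init__()
        self.sublayer = sublayer        # attention or feedforward in a real model
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, dual):
        out = self.sublayer(x)          # sublayer reads the Post-LN stream
        x = self.norm(x + out)          # Post-LN residual: normalize after the add
        dual = dual + out               # Pre-LN-style stream: accumulate raw outputs
        return x, dual

class ResiDual(nn.Module):
    def __init__(self, dim, depth):
        super().__init__()
        self.blocks = nn.ModuleList([
            ResiDualBlock(dim, nn.Sequential(
                nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim)))
            for _ in range(depth)
        ])
        self.final_norm = nn.LayerNorm(dim)

    def forward(self, x):
        dual = x                        # assumed: dual stream starts from the embeddings
        for block in self.blocks:
            x, dual = block(x, dual)
        return x + self.final_norm(dual)  # fuse both streams at the output

tokens = torch.randn(2, 16, 64)                  # (batch, seq, dim)
print(ResiDual(dim=64, depth=4)(tokens).shape)   # torch.Size([2, 16, 64])
```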
Related projects
Alternatives and complementary repositories for ResiDual
- Implementation of GateLoop Transformer in Pytorch and Jax ☆86 · Updated 5 months ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆95 · Updated last year
- Implementation of Infini-Transformer in Pytorch ☆104 · Updated last month
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆43 · Updated last year
- Implementation of Agent Attention in Pytorch ☆86 · Updated 4 months ago
- Standalone Product Key Memory module in Pytorch - for augmenting Transformer models ☆72 · Updated 3 months ago
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆49 · Updated last year
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆108 · Updated last month
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆43 · Updated last month
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆57 · Updated last year
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆61 · Updated 6 months ago
- Sequence Modeling with Structured State Spaces ☆60 · Updated 2 years ago
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆66 · Updated last year
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆203 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆127 · Updated 6 months ago
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing (a minimal sketch follows this list) ☆47 · Updated 2 years ago
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆53 · Updated 6 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- Explorations into the recently proposed Taylor Series Linear Attention ☆90 · Updated 3 months ago
- Implementation of 🌻 Mirasol, SOTA Multimodal Autoregressive model out of Google Deepmind, in Pytorch ☆88 · Updated 10 months ago
- Language Quantized AutoEncoders ☆94 · Updated last year
- Implementation of Zorro, Masked Multimodal Transformer, in Pytorch ☆95 · Updated last year
- My explorations into editing the knowledge and memories of an attention network ☆34 · Updated last year
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆205 · Updated 3 months ago
- Unofficial PyTorch implementation of "Step-unrolled Denoising Autoencoders for Text Generation" ☆23 · Updated 2 years ago
- Implementation of Hourglass Transformer, in Pytorch, from Google and OpenAI ☆84 · Updated 2 years ago
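The Token Shift GPT entry above hinges on one primitive: replace part of each token's feature vector with the previous token's features, so information mixes along the sequence without any attention. A minimal sketch, assuming a (batch, seq, dim) tensor and a simple half-and-half split; the repo's actual scheme may shift features differently.

```python
import torch
import torch.nn.functional as F

def token_shift(x):
    # x: (batch, seq, dim). Keep half of the features as-is and swap the other
    # half for the previous token's features; position 0 receives zeros.
    x_self, x_prev = x.chunk(2, dim=-1)
    x_prev = F.pad(x_prev, (0, 0, 1, -1))  # pad one step at the front of seq, crop one at the end
    return torch.cat((x_self, x_prev), dim=-1)

x = torch.randn(2, 8, 16)
print(token_shift(x).shape)  # torch.Size([2, 8, 16])
```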