maum-ai / pnlp-mixer
Unofficial PyTorch Implementation for pNLP-Mixer: an Efficient all-MLP Architecture for Language (https://arxiv.org/abs/2202.04350)
☆63 · Updated 3 years ago
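For orientation, below is a minimal, hypothetical PyTorch sketch of the MLP-Mixer-style block that pNLP-Mixer stacks on top of its hashed token projections. It is not code from this repository; the module names, dimensions, and the bottleneck layer in the example are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """One MLP-Mixer block: a token-mixing MLP followed by a channel-mixing MLP (illustrative sketch)."""
    def __init__(self, num_tokens: int, hidden_dim: int, token_mlp_dim: int, channel_mlp_dim: int):
        super().__init__()
        self.token_norm = nn.LayerNorm(hidden_dim)
        self.token_mlp = nn.Sequential(          # mixes information across token positions
            nn.Linear(num_tokens, token_mlp_dim),
            nn.GELU(),
            nn.Linear(token_mlp_dim, num_tokens),
        )
        self.channel_norm = nn.LayerNorm(hidden_dim)
        self.channel_mlp = nn.Sequential(        # mixes information within each token's feature vector
            nn.Linear(hidden_dim, channel_mlp_dim),
            nn.GELU(),
            nn.Linear(channel_mlp_dim, hidden_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, hidden_dim)
        y = self.token_norm(x).transpose(1, 2)           # (batch, hidden_dim, num_tokens)
        x = x + self.token_mlp(y).transpose(1, 2)        # token mixing with residual connection
        x = x + self.channel_mlp(self.channel_norm(x))   # channel mixing with residual connection
        return x

if __name__ == "__main__":
    # Hypothetical shapes: project Bloom-filter token features to a hidden size, then apply mixer blocks.
    batch, num_tokens, feature_dim, hidden_dim = 2, 64, 1024, 256
    bottleneck = nn.Linear(feature_dim, hidden_dim)       # bottleneck after the (non-trainable) hashing projection
    mixer = nn.Sequential(*[MixerBlock(num_tokens, hidden_dim, 256, 512) for _ in range(2)])
    features = torch.randn(batch, num_tokens, feature_dim)  # stand-in for projected token features
    out = mixer(bottleneck(features))
    print(out.shape)  # torch.Size([2, 64, 256])
```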
Alternatives and similar repositories for pnlp-mixer:
Users interested in pnlp-mixer are comparing it to the libraries listed below
- Implementation of RQ Transformer, proposed in the paper "Autoregressive Image Generation using Residual Quantization" · ☆102 · Updated 2 years ago
- Implementation of Fast Transformer in Pytorch · ☆173 · Updated 3 years ago
- A PyTorch Implementation of the Luna: Linear Unified Nested Attention · ☆41 · Updated 3 years ago
- Unofficial PyTorch implementation of Google's FNet: Mixing Tokens with Fourier Transforms. With checkpoints. · ☆73 · Updated 2 years ago
- Sequence modeling with Mega. · ☆296 · Updated 2 years ago
- Official code for Wav2Seq · ☆96 · Updated 2 years ago
- Unofficial Pytorch Implementation of WaveGrad2 · ☆112 · Updated 3 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch · ☆99 · Updated 2 years ago
- Implementation of a Light Recurrent Unit in Pytorch · ☆47 · Updated 5 months ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch · ☆73 · Updated 2 years ago
- Implementation of the proposed Adam-atan2 from Google Deepmind in Pytorch · ☆101 · Updated 3 months ago
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). · ☆225 · Updated 2 years ago
- My attempts at applying Soundstream design on learned tokenization of text and then applying hierarchical attention to text generation · ☆83 · Updated 5 months ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch · ☆226 · Updated 6 months ago
- ICASSP 2023 Accepted · ☆189 · Updated 10 months ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) · ☆60 · Updated 2 years ago
- SpeechCLIP: Integrating Speech with Pre-Trained Vision and Language Model, Accepted to IEEE SLT 2022 · ☆112 · Updated 2 years ago
- Randomized Positional Encodings Boost Length Generalization of Transformers · ☆79 · Updated 11 months ago
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena · ☆204 · Updated last year
- ☆30 · Updated last year
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch · ☆118 · Updated 3 years ago
- Implementation of Insertion-deletion Denoising Diffusion Probabilistic Models · ☆30 · Updated 2 years ago
- Relative Positional Encoding for Transformers with Linear Complexity · ☆62 · Updated 2 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 · ☆49 · Updated 2 years ago
- TF/Keras code for DiffStride, a pooling layer with learnable strides. · ☆124 · Updated 3 years ago
- Implementation of Agent Attention in Pytorch · ☆90 · Updated 8 months ago
- Simple torch.nn.module implementation of Alias-Free-GAN style filter and resample · ☆88 · Updated 2 years ago
- ☆74 · Updated 4 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch · ☆112 · Updated 4 years ago
- ☆64 · Updated 6 months ago