maum-ai / pnlp-mixer
Unofficial PyTorch Implementation for pNLP-Mixer: an Efficient all-MLP Architecture for Language (https://arxiv.org/abs/2202.04350)
☆63 · Updated 3 years ago
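For context, pNLP-Mixer pairs a non-trainable, hash-based projection layer (in place of token embeddings) with a stack of MLP-Mixer layers. Below is a minimal sketch of one such token/channel-mixing layer; the class name `MixerBlock` and its hyperparameters are illustrative and are not taken from this repository's code.

```python
# Minimal sketch of one MLP-Mixer layer, as used in pNLP-Mixer-style models.
# Names and defaults here are assumptions for illustration only.
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    def __init__(self, seq_len: int, hidden_dim: int, expansion: int = 2):
        super().__init__()
        self.token_norm = nn.LayerNorm(hidden_dim)
        # Token mixing: an MLP applied across the sequence dimension.
        self.token_mlp = nn.Sequential(
            nn.Linear(seq_len, seq_len * expansion),
            nn.GELU(),
            nn.Linear(seq_len * expansion, seq_len),
        )
        self.channel_norm = nn.LayerNorm(hidden_dim)
        # Channel mixing: an MLP applied across the feature dimension.
        self.channel_mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim * expansion),
            nn.GELU(),
            nn.Linear(hidden_dim * expansion, hidden_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_dim)
        y = self.token_norm(x).transpose(1, 2)          # (batch, hidden_dim, seq_len)
        x = x + self.token_mlp(y).transpose(1, 2)       # token-mixing residual
        x = x + self.channel_mlp(self.channel_norm(x))  # channel-mixing residual
        return x
```

For example, `MixerBlock(seq_len=64, hidden_dim=256)` maps a `(batch, 64, 256)` tensor to the same shape, so blocks can be stacked; the actual repository's layer sizes and projection features may differ.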
Alternatives and similar repositories for pnlp-mixer
Users interested in pnlp-mixer are comparing it to the libraries listed below.
- Implementation of RQ Transformer, proposed in the paper "Autoregressive Image Generation using Residual Quantization" ☆110 · Updated 3 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch ☆74 · Updated 2 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆100 · Updated 2 years ago
- Implementation of Fast Transformer in Pytorch ☆175 · Updated 3 years ago
- Official code for Wav2Seq ☆96 · Updated 2 years ago
- Sequence modeling with Mega. ☆296 · Updated 2 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch ☆230 · Updated 9 months ago
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆56 · Updated last year
- A PyTorch implementation of Luna: Linear Unified Nested Attention ☆41 · Updated 3 years ago
- ☆31 · Updated last year
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆119 · Updated 3 years ago
- ☆163 · Updated 2 years ago
- SpeechCLIP: Integrating Speech with Pre-Trained Vision and Language Model, accepted to IEEE SLT 2022 ☆115 · Updated 2 years ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆119 · Updated 8 months ago
- Unofficial Pytorch implementation of WaveGrad2 ☆112 · Updated 3 years ago
- ☆64 · Updated 9 months ago
- Implementation of NWT, audio-to-video generation, in Pytorch ☆91 · Updated 3 years ago
- Accepted to ICASSP 2023 ☆189 · Updated last year
- Relative Positional Encoding for Transformers with Linear Complexity ☆64 · Updated 3 years ago
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆204 · Updated last year
- Implementation of Insertion-deletion Denoising Diffusion Probabilistic Models ☆30 · Updated 3 years ago
- Implementation of Hourglass Transformer, in Pytorch, from Google and OpenAI ☆91 · Updated 3 years ago
- Randomized Positional Encodings Boost Length Generalization of Transformers ☆81 · Updated last year
- Unofficial PyTorch implementation of "Step-unrolled Denoising Autoencoders for Text Generation" ☆24 · Updated 2 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch ☆118 · Updated 4 years ago
- PyTorch implementations of various vector quantization methods ☆30 · Updated 3 years ago
- ResiDual: Transformer with Dual Residual Connections, https://arxiv.org/abs/2304.14802 ☆93 · Updated last year
- Another attempt at a long-context / efficient transformer by me ☆38 · Updated 3 years ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆214 · Updated 2 years ago