Implementation of Multistream Transformers in Pytorch
☆54 · Jul 31, 2021 · Updated 4 years ago
Alternatives and similar repositories for multistream-transformers
Users that are interested in multistream-transformers are comparing it to the libraries listed below.
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing ☆49 · Jan 27, 2022 · Updated 4 years ago
- Implementation of N-Grammer, augmenting Transformers with latent n-grams, in Pytorch ☆76 · Dec 4, 2022 · Updated 3 years ago
- An implementation of (Induced) Set Attention Block, from the Set Transformer paper ☆67 · Jan 10, 2023 · Updated 3 years ago
- Implementation of Fast Transformer in Pytorch ☆176 · Aug 26, 2021 · Updated 4 years ago
- A Python library for highly configurable transformers - easing model architecture search and experimentation ☆48 · Nov 30, 2021 · Updated 4 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in Pytorch ☆46 · Mar 3, 2021 · Updated 5 years ago
- ☆27 · Mar 13, 2021 · Updated 5 years ago
- Implementation of a Transformer, but completely in Triton ☆279 · Apr 5, 2022 · Updated 3 years ago
- Implementation of a holodeck, written in Pytorch ☆18 · Nov 1, 2023 · Updated 2 years ago
- A simple Transformer where the softmax has been replaced with normalization ☆20 · Sep 11, 2020 · Updated 5 years ago
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆82 · Oct 30, 2021 · Updated 4 years ago
- Implementation of Nyström Self-attention, from the paper Nyströmformer ☆145 · Mar 24, 2025 · Updated last year
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆166 · Feb 12, 2024 · Updated 2 years ago
- 🚀 Implementation of easy-to-use 3D parallelism based on Huggingface Transformers & Microsoft DeepSpeed ☆31 · Feb 5, 2022 · Updated 4 years ago
- Pytorch reimplementation of Molecule Attention Transformer, which uses a transformer to tackle the graph-like structure of molecules ☆58 · Dec 2, 2020 · Updated 5 years ago
- Implementation of the 😇 Attention layer from the paper, Scaling Local Self-Attention For Parameter Efficient Visual Backbones ☆201 · Mar 24, 2021 · Updated 5 years ago
- Based on https://github.com/fatchord/WaveRNN ☆24 · May 3, 2020 · Updated 5 years ago
- Implementation of Insertion-deletion Denoising Diffusion Probabilistic Models ☆30 · May 31, 2022 · Updated 3 years ago
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021) ☆228 · Apr 18, 2022 · Updated 3 years ago
- Another attempt at a long-context / efficient transformer by me ☆38 · Apr 11, 2022 · Updated 3 years ago
- Code for our NeurIPS 2020 paper "Auto Learning Attention", coming soon ☆22 · Apr 14, 2021 · Updated 4 years ago
- Unsupervised spoken sentence embeddings ☆14 · Dec 14, 2022 · Updated 3 years ago
- Usable implementation of Mogrifier, a circuit for enhancing LSTMs and potentially other networks, from DeepMind ☆22 · Jun 9, 2024 · Updated last year
- Source code for "N-ary Constituent Tree Parsing with Recursive Semi-Markov Model", published at ACL 2021 ☆10 · May 27, 2021 · Updated 4 years ago
- A PyTorch implementation of Luna: Linear Unified Nested Attention ☆41 · Jul 29, 2021 · Updated 4 years ago
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆207 · Aug 26, 2023 · Updated 2 years ago
- Implementation of the GBST block from the Charformer paper, in Pytorch ☆118 · Jul 15, 2021 · Updated 4 years ago
- Implementation of Tranception, an attention network, paired with retrieval, that is SOTA for protein fitness prediction ☆32 · Jun 19, 2022 · Updated 3 years ago
- JAX implementation of ViT-VQGAN ☆82 · Sep 21, 2022 · Updated 3 years ago
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆41 · Dec 21, 2025 · Updated 3 months ago
- Code for "Analyzing Redundancy in Pretrained Transformer Models", accepted at EMNLP 2020 ☆14 · Oct 6, 2020 · Updated 5 years ago
- A GPT made only of MLPs, in Jax ☆59 · Jun 23, 2021 · Updated 4 years ago
- ☆11 · Oct 3, 2021 · Updated 4 years ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆220 · Feb 13, 2023 · Updated 3 years ago
- Implementation of Cross Transformer for spatially-aware few-shot transfer, in Pytorch ☆54 · Mar 30, 2021 · Updated 4 years ago
- Zero-vocab or low-vocab embeddings ☆18 · Jul 17, 2022 · Updated 3 years ago
- A project for Korean auto-spacing ☆12 · Aug 3, 2020 · Updated 5 years ago
- Implementation of Discrete Key / Value Bottleneck, in Pytorch ☆88 · Jul 9, 2023 · Updated 2 years ago
- Implementation of Kronecker Attention in Pytorch ☆19 · Sep 12, 2020 · Updated 5 years ago
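One entry above, Token Shift GPT, mixes information across the sequence purely by shifting feature channels along the sequence axis. A minimal sketch of that idea in Pytorch follows; the function name and the half-channel split are illustrative assumptions, not the repository's exact implementation:

```python
import torch
import torch.nn.functional as F

def token_shift(x: torch.Tensor) -> torch.Tensor:
    """Causally mix tokens by shifting half the feature channels
    one position forward along the sequence.

    x: (batch, seq_len, dim). A sketch of the token-shift idea,
    not the Token Shift GPT repo's exact code.
    """
    keep, shift = x.chunk(2, dim=-1)
    # pad one step at the front of the sequence, drop the last step,
    # so position t sees channels from position t - 1
    shifted = F.pad(shift, (0, 0, 1, -1))
    return torch.cat((keep, shifted), dim=-1)

x = torch.randn(2, 8, 16)
out = token_shift(x)
# shape is preserved; the shifted half of the first position is
# all zeros, since there is no earlier token to shift in
```

Because the shift is strictly backward in the sequence, the operation stays autoregressive without any attention mask.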