lucidrains / h-transformer-1d
Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning
☆154 · Updated 7 months ago
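For orientation, below is a minimal usage sketch based on the `HTransformer1D` module and constructor arguments shown in the repository's README; exact argument names and defaults may differ between versions of the `h-transformer-1d` package.

```python
import torch
from h_transformer_1d import HTransformer1D

# Hierarchical-attention model over token sequences.
# block_size (assumed name) sets the finest block of the hierarchical
# low-rank decomposition of the attention matrix.
model = HTransformer1D(
    num_tokens = 256,      # vocabulary size
    dim = 512,             # model dimension
    depth = 8,             # number of layers
    max_seq_len = 8192,    # maximum sequence length
    heads = 8,             # attention heads
    block_size = 128,
    causal = False         # set True for autoregressive modeling
)

tokens = torch.randint(0, 256, (1, 8000))  # (batch, seq) of token ids
mask = torch.ones(1, 8000).bool()          # optional padding mask

logits = model(tokens, mask = mask)        # (1, 8000, 256) per-token logits
```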
Related projects:
- Implementation of Nyström Self-attention, from the paper Nyströmformer ☆120 · Updated 7 months ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆94 · Updated last year
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆203 · Updated last year
- A simple to use pytorch wrapper for contrastive self-supervised learning on any neural network ☆119 · Updated 3 years ago
- Code for the paper PermuteFormer ☆43 · Updated 2 years ago
- Axial Positional Embedding for Pytorch ☆61 · Updated 3 years ago
- Implementation of Feedback Transformer in Pytorch ☆103 · Updated 3 years ago
- Implementation of Linformer for Pytorch ☆244 · Updated 8 months ago
- MODALS: Modality-agnostic Automated Data Augmentation in the Latent Space ☆40 · Updated 3 years ago
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). ☆221 · Updated 2 years ago
- Sequence Modeling with Structured State Spaces ☆60 · Updated 2 years ago
- Relative Positional Encoding for Transformers with Linear Complexity ☆61 · Updated 2 years ago
- Implementation of Fast Transformer in Pytorch ☆171 · Updated 3 years ago
- Code repository of the paper "CKConv: Continuous Kernel Convolution For Sequential Data" published at ICLR 2022. https://arxiv.org/abs/21… ☆117 · Updated last year
- Fully featured implementation of Routing Transformer ☆283 · Updated 2 years ago
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆252 · Updated 3 years ago
- Sequence modeling with Mega. ☆296 · Updated last year
- Implementation of ETSformer, state of the art time-series Transformer, in Pytorch ☆145 · Updated last year
- Unofficial PyTorch implementation of Fastformer based on the paper "Fastformer: Additive Attention Can Be All You Need" ☆134 · Updated 3 years ago
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers. ☆97 · Updated 3 years ago
- Experimenting with different regression losses. Implemented in Pytorch. ☆143 · Updated 5 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch ☆106 · Updated 3 years ago
- Simple NumPy implementation of the FAVOR+ attention mechanism, https://teddykoker.com/2020/11/performers/ ☆36 · Updated 3 years ago
- Implementation of Hourglass Transformer, in Pytorch, from Google and OpenAI ☆74 · Updated 2 years ago
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆119 · Updated 11 months ago
- Code for Multi-Head Attention: Collaborate Instead of Concatenate ☆148 · Updated last year
- A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in Pytorch ☆222 · Updated last year
- Unofficial PyTorch implementation of Attention Free Transformer (AFT) layers by Apple Inc. ☆226 · Updated 2 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆116 · Updated 3 years ago