alex-matton / causal-transformer-decoder
☆ 72 · Updated 4 years ago
Alternatives and similar repositories for causal-transformer-decoder
Users interested in causal-transformer-decoder are comparing it to the libraries listed below.
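For context, here is a minimal sketch of what a causal transformer decoder does, built only from PyTorch's standard nn.TransformerDecoder and a square subsequent (causal) mask. It is an illustrative assumption-laden example, not code from the linked repository; the dimensions and tensor names are arbitrary.

```python
# Minimal sketch of causal decoding with PyTorch's built-in modules
# (illustrative only; not taken from alex-matton/causal-transformer-decoder).
import torch
import torch.nn as nn

d_model, nhead, seq_len, batch = 64, 4, 10, 2  # assumed toy dimensions

layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
decoder = nn.TransformerDecoder(layer, num_layers=2)

tgt = torch.randn(batch, seq_len, d_model)     # decoder inputs (e.g. token embeddings)
memory = torch.randn(batch, seq_len, d_model)  # encoder outputs

# Upper-triangular -inf mask so position i cannot attend to positions > i.
causal_mask = nn.Transformer.generate_square_subsequent_mask(seq_len)

out = decoder(tgt, memory, tgt_mask=causal_mask)  # shape: (batch, seq_len, d_model)
```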
- Relative Positional Encoding for Transformers with Linear Complexity ☆ 65 · Updated 3 years ago
- Implementation of Memformer, a Memory-augmented Transformer, in Pytorch ☆ 123 · Updated 4 years ago
- Implementation of Long-Short Transformer, combining local and global inductive biases for attention over long sequences, in Pytorch ☆ 120 · Updated 4 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in Pytorch ☆ 101 · Updated 2 years ago
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆ 206 · Updated 2 years ago
- ☆ 164 · Updated 2 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention ☆ 49 · Updated 5 years ago
- Pytorch implementation of Compressive Transformers, from Deepmind ☆ 162 · Updated 4 years ago
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021) ☆ 228 · Updated 3 years ago
- Axial Positional Embedding for Pytorch ☆ 83 · Updated 8 months ago
- [EMNLP'19] Summary for Transformer Understanding ☆ 53 · Updated 5 years ago
- Implementation of Imputer: Sequence Modelling via Imputation and Dynamic Programming in PyTorch ☆ 58 · Updated 5 years ago
- Code for "Finetuning Pretrained Transformers into Variational Autoencoders" ☆ 39 · Updated 3 years ago
- Code for the paper PermuteFormer ☆ 42 · Updated 4 years ago
- Sequence Modeling with Structured State Spaces ☆ 66 · Updated 3 years ago
- Representation learning for NLP @ JSALT19 ☆ 39 · Updated 5 years ago
- Levenshtein edit-distance on PyTorch and CUDA ☆ 95 · Updated 2 years ago
- Continuous Augmented Positional Embeddings (CAPE) implementation for PyTorch ☆ 42 · Updated 2 years ago
- PyTorch implementations of various vector quantization methods ☆ 33 · Updated 4 years ago
- A library for making Transformer Variational Autoencoders. (Extends the Huggingface/transformers library.) ☆ 142 · Updated 4 years ago
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆ 81 · Updated 4 years ago
- Skyformer: Remodel Self-Attention with Gaussian Kernel and Nyström Method (NeurIPS 2021) ☆ 63 · Updated 3 years ago
- Implementation of Fast Transformer in Pytorch ☆ 177 · Updated 4 years ago
- Implementation of Insertion-deletion Denoising Diffusion Probabilistic Models ☆ 30 · Updated 3 years ago
- Official code repository of the paper Learning Associative Inference Using Fast Weight Memory by Schlag et al. ☆ 28 · Updated 4 years ago
- Contrastive Language-Audio Pretraining ☆ 88 · Updated 3 years ago
- TF/Keras code for DiffStride, a pooling layer with learnable strides ☆ 124 · Updated 3 years ago
- Implementation of H-Transformer-1D, Hierarchical Attention for Sequence Learning ☆ 165 · Updated last year
- Standalone Product Key Memory module in Pytorch, for augmenting Transformer models ☆ 83 · Updated last year
- Sequence Modeling with Multiresolution Convolutional Memory (ICML 2023) ☆ 127 · Updated 2 years ago