LUMIA-Group / FourierTransformer
The official PyTorch implementation of the paper "Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator" (ACL 2023 Findings)
☆40 · Updated last year
Alternatives and similar repositories for FourierTransformer
Users interested in FourierTransformer are comparing it to the repositories listed below.
- Official PyTorch Implementation of "The Hidden Attention of Mamba Models" ☆226 · Updated last year
- Official Code for ICLR 2024 Paper: Non-negative Contrastive Learning ☆45 · Updated last year
- A Triton Kernel for incorporating Bi-Directionality in Mamba2 ☆75 · Updated 8 months ago
- ☆148 · Updated last year
- ☆197 · Updated last year
- A repository for DenseSSMs ☆88 · Updated last year
- Implementation of Griffin from the paper: "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models" ☆56 · Updated last week
- DeciMamba: Exploring the Length Extrapolation Potential of Mamba (ICLR 2025) ☆31 · Updated 5 months ago
- [ICLR 2025] Official Code Release for Explaining Modern Gated-Linear RNNs via a Unified Implicit Attention Formulation ☆45 · Updated 6 months ago
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆39 · Updated 11 months ago
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆66 · Updated last year
- ☆34 · Updated 2 years ago
- Implementation of Agent Attention in PyTorch ☆91 · Updated last year
- HGRN2: Gated Linear RNNs with State Expansion ☆54 · Updated last year
- ☆140 · Updated last year
- [NeurIPS 2022] Your Transformer May Not be as Powerful as You Expect (official implementation) ☆33 · Updated 2 years ago
- ☆72 · Updated 7 months ago
- MambaFormer in-context learning experiments and implementation for https://arxiv.org/abs/2402.04248 ☆56 · Updated last year
- Awesome list of papers that extend Mamba to various applications. ☆137 · Updated 3 months ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆82 · Updated last year
- ☆16 · Updated 2 years ago
- Towards Understanding the Mixture-of-Experts Layer in Deep Learning ☆31 · Updated last year
- [ICLR 2023] Official implementation of Transnormer in our ICLR 2023 paper - Toeplitz Neural Network for Sequence Modeling ☆80 · Updated last year
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ☆158 · Updated 7 months ago
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆78 · Updated last year
- ☆47 · Updated last year
- ☆106 · Updated 2 years ago
- Mixture of Attention Heads ☆49 · Updated 2 years ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆125 · Updated last year
- Toy reproduction of Auxiliary-Loss-Free Load Balancing Strategy for Mixture-of-Experts ☆22 · Updated last year