fkodom / fft-conv-pytorch
Implementation of 1D, 2D, and 3D FFT convolutions in PyTorch. Much faster than direct convolutions for large kernel sizes.
☆512 · Updated 2 years ago
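The speed claim follows from the convolution theorem: convolution in the signal domain is pointwise multiplication in the frequency domain, so an FFT-based convolution costs O(n log n) instead of O(nK) for a length-K kernel, which wins once K is large. Below is a minimal sketch of the underlying idea in plain `torch.fft` — not this library's API; the helper name `fft_conv1d` and the shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def fft_conv1d(signal: torch.Tensor, kernel: torch.Tensor) -> torch.Tensor:
    """Cross-correlate signal (B, C_in, N) with kernel (C_out, C_in, K) via the FFT.

    Matches F.conv1d with no padding: multiply the signal's spectrum by the
    conjugate of the kernel's spectrum (conjugation turns circular convolution
    into cross-correlation), then crop to the "valid" output length.
    """
    n = signal.shape[-1]
    sig_f = torch.fft.rfft(signal, n=n)           # (B, C_in, n//2 + 1)
    ker_f = torch.fft.rfft(kernel, n=n)           # (C_out, C_in, n//2 + 1), kernel zero-padded to n
    # Pointwise multiply and sum over input channels.
    out_f = torch.einsum("bcf,ocf->bof", sig_f, ker_f.conj())
    out = torch.fft.irfft(out_f, n=n)             # (B, C_out, n)
    return out[..., : n - kernel.shape[-1] + 1]   # drop the circular wrap-around tail

signal = torch.randn(2, 3, 2048)
kernel = torch.randn(8, 3, 128)
assert torch.allclose(fft_conv1d(signal, kernel), F.conv1d(signal, kernel), atol=1e-3)
```

The library itself extends this scheme to 1D, 2D, and 3D and handles stride, padding, and groups; the sketch only shows why the frequency-domain route is asymptotically cheaper.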
Alternatives and similar repositories for fft-conv-pytorch
Users interested in fft-conv-pytorch are comparing it to the libraries listed below.
- DCT (discrete cosine transform) functions for PyTorch ☆634 · Updated 3 years ago
- Implementation of Axial attention - attending to multi-dimensional data efficiently ☆389 · Updated 4 years ago
- A simple way to keep track of an Exponential Moving Average (EMA) version of your PyTorch model ☆621 · Updated 11 months ago
- Tiny PyTorch library for maintaining a moving average of a collection of parameters. ☆439 · Updated last year
- An All-MLP solution for Vision, from Google AI ☆1,052 · Updated 4 months ago
- Learning Rate Warmup in PyTorch ☆413 · Updated 5 months ago
- Implementation of a U-Net complete with efficient attention as well as the latest research findings ☆288 · Updated last year
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ☆1,077 · Updated 3 years ago
- Seamless analysis of your PyTorch models (RAM usage, FLOPs, MACs, receptive field, etc.) ☆222 · Updated 7 months ago
- Differentiable fast wavelet transforms in PyTorch with GPU support. ☆392 · Updated 2 months ago
- Code repository of the paper "Modelling Long Range Dependencies in ND: From Task-Specific to a General Purpose CNN" https://arxiv.org/abs… ☆183 · Updated 6 months ago
- Implementation of Linformer for PyTorch ☆301 · Updated last year
- Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!) ☆537 · Updated last year
- An implementation of the efficient attention module. ☆325 · Updated 4 years ago
- An official PyTorch implementation of Fast Fourier Convolution. ☆379 · Updated last year
- Unofficial implementation of MLP-Mixer: An all-MLP Architecture for Vision ☆217 · Updated 4 years ago
- 1D interpolation for PyTorch ☆175 · Updated last year
- Transformer based on a variant of attention with linear complexity with respect to sequence length ☆811 · Updated last year
- [NeurIPS 2021] [T-PAMI] Global Filter Networks for Image Classification ☆489 · Updated 2 years ago
- An implementation of local windowed attention for language modeling ☆486 · Updated 4 months ago
- PyTorch implementation of 2D Discrete Wavelet (DWT) and Dual Tree Complex Wavelet Transforms (DTCWT) and a DTCWT-based ScatterNet ☆1,140 · Updated 2 years ago
- Estimate/count FLOPs for a given neural network using PyTorch ☆306 · Updated 3 years ago
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models ☆803 · Updated 5 months ago
- Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" ☆383 · Updated 2 years ago
- Neighborhood Attention Transformer, arXiv 2022 / CVPR 2023. Dilated Neighborhood Attention Transformer, arXiv 2022 ☆1,155 · Updated last year
- Unofficial implementation of Google's FNet: Mixing Tokens with Fourier Transforms ☆260 · Updated 4 years ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆216 · Updated 2 years ago
- ☆466 · Updated 2 years ago
- Unofficial Implementation of MLP-Mixer, gMLP, resMLP, Vision Permutator, S2MLP, S2MLPv2, RaftMLP, HireMLP, ConvMLP, AS-MLP, SparseMLP, Co… ☆170 · Updated 3 years ago
- A high-level toolbox for using complex-valued neural networks in PyTorch ☆729 · Updated last year