lucidrains / Adan-pytorch
Implementation of the Adan (ADAptive Nesterov momentum algorithm) optimizer in PyTorch
☆250 · Updated 2 years ago
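For orientation, here is a minimal usage sketch of the optimizer this repository implements. The import path `adan_pytorch`, the `Adan` class name, and the hyperparameter values are assumptions based on the package's typical packaging, not taken from this page:

```python
import torch
from adan_pytorch import Adan  # assumed import path for this package

# Toy model and data, purely for illustration.
model = torch.nn.Linear(16, 1)
x = torch.randn(8, 16)

# Adan tracks three moment estimates, so it takes three betas
# (the values below are illustrative, not prescriptive).
optim = Adan(
    model.parameters(),
    lr=1e-3,
    betas=(0.02, 0.08, 0.01),
    weight_decay=0.02,
)

# Standard PyTorch training step.
loss = model(x).pow(2).mean()
loss.backward()
optim.step()
optim.zero_grad()
```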
Alternatives and similar repositories for Adan-pytorch:
Users interested in Adan-pytorch are comparing it to the libraries listed below.
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆204 · Updated last year
- A library to inspect and extract intermediate layers of PyTorch models. ☆470 · Updated 2 years ago
- Implementation of a U-net complete with efficient attention as well as the latest research findings ☆271 · Updated 8 months ago
- Simple and efficient RevNet-Library for PyTorch with XLA and DeepSpeed support and parameter offload ☆125 · Updated 2 years ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆210 · Updated last year
- Named tensors with first-class dimensions for PyTorch ☆322 · Updated last year
- Implementation of Recurrent Interface Network (RIN), for highly efficient generation of images and video without cascading networks, in P… ☆197 · Updated 11 months ago
- An alternative to convolution in neural networks ☆254 · Updated 9 months ago
- Memory Efficient Attention (O(sqrt(n))) for Jax and PyTorch ☆181 · Updated 2 years ago
- Unofficial JAX implementations of deep learning research papers ☆152 · Updated 2 years ago
- ☆72 · Updated 2 years ago
- FFCV-SSL Fast Forward Computer Vision for Self-Supervised Learning. ☆202 · Updated last year
- Convert scikit-learn models to PyTorch modules ☆159 · Updated 8 months ago
- Code release for "Dropout Reduces Underfitting" ☆311 · Updated last year
- ☆164 · Updated last year
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆184 · Updated 2 years ago
- Minimal standalone example of a diffusion model ☆154 · Updated 2 years ago
- Memory-mapped NumPy arrays of varying shapes ☆291 · Updated 6 months ago
- Implementation of a memory efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" ☆370 · Updated last year
- Learning to Initialize Neural Networks for Stable and Efficient Training ☆138 · Updated 2 years ago
- Implementation of Nyström Self-attention, from the paper Nyströmformer ☆124 · Updated 11 months ago
- Just some miscellaneous utility functions / decorators / modules related to PyTorch and Accelerate to help speed up implementation of new… ☆119 · Updated 5 months ago
- Implementation of Hourglass Transformer, in PyTorch, from Google and OpenAI ☆84 · Updated 3 years ago
- A library that contains a rich collection of performant PyTorch model metrics, a simple interface to create new metrics, a toolkit to fac… ☆224 · Updated last month
- Implementing the Denoising Diffusion Probabilistic Model in Flax ☆144 · Updated 2 years ago
- Code for our NeurIPS 2022 paper ☆366 · Updated 2 years ago
- A PyTorch implementation of Perceiver, Perceiver IO and Perceiver AR with PyTorch Lightning scripts for distributed training ☆446 · Updated last year
- ☆197 · Updated 2 years ago
- Optimized library for large-scale extraction of frames and audio from video. ☆202 · Updated last year