HomebrewML / revlib
Simple and efficient RevNet library for PyTorch with XLA and DeepSpeed support and parameter offload
☆127 · Updated 2 years ago
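revlib's core building block is the reversible residual coupling from the RevNet line of work: because each block's input can be recomputed exactly from its output, activations do not need to be stored for backpropagation. The snippet below is a minimal sketch of that coupling in plain PyTorch, not revlib's actual API; the class and method names are illustrative only.

```python
import torch
from torch import nn


class ReversibleBlock(nn.Module):
    """Additive coupling y1 = x1 + F(x2), y2 = x2 + G(y1) (RevNet-style)."""

    def __init__(self, f: nn.Module, g: nn.Module):
        super().__init__()
        self.f, self.g = f, g

    def forward(self, x1: torch.Tensor, x2: torch.Tensor):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1: torch.Tensor, y2: torch.Tensor):
        # Recompute the inputs from the outputs; this is what allows a
        # reversible network to rebuild activations during the backward
        # pass instead of keeping them in memory.
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2


# Round-trip check: the block is exactly invertible (up to float error).
block = ReversibleBlock(nn.Linear(64, 64), nn.Linear(64, 64))
a, b = torch.randn(8, 64), torch.randn(8, 64)
y1, y2 = block(a, b)
r1, r2 = block.inverse(y1, y2)
assert torch.allclose(r1, a, atol=1e-5) and torch.allclose(r2, b, atol=1e-5)
```

A full library additionally wires this coupling into autograd so activations are recomputed in the backward pass and, in revlib's case, adds the XLA/DeepSpeed and parameter-offload plumbing mentioned in the description above.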
Alternatives and similar repositories for revlib
Users interested in revlib are comparing it to the libraries listed below.
- Named tensors with first-class dimensions for PyTorch ☆331 · Updated 2 years ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆214 · Updated 2 years ago
- A case study of efficient training of large language models using commodity hardware. ☆69 · Updated 2 years ago
- Another attempt at a long-context / efficient transformer by me ☆38 · Updated 3 years ago
- Contrastive Language-Image Pretraining ☆143 · Updated 2 years ago
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆81 · Updated 3 years ago
- Drop-in replacement for any ResNet with a significantly reduced memory footprint and better representation capabilities ☆209 · Updated last year
- Unofficial JAX implementations of deep learning research papers ☆156 · Updated 2 years ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆50 · Updated last year
- EfficientNet, MobileNetV3, MobileNetV2, MixNet, etc in JAX w/ Flax Linen and Objax ☆128 · Updated last year
- Pytorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆177 · Updated 2 weeks ago
- FFCV-SSL Fast Forward Computer Vision for Self-Supervised Learning. ☆208 · Updated last year
- Implementation of Hourglass Transformer, in Pytorch, from Google and OpenAI ☆91 · Updated 3 years ago
- Implementation of the Adan (ADAptive Nesterov momentum algorithm) Optimizer in Pytorch ☆252 · Updated 2 years ago
- 🧀 Pytorch code for the Fromage optimiser. ☆124 · Updated 11 months ago
- Implementation of Nyström Self-attention, from the paper Nyströmformer ☆135 · Updated 3 months ago
- ☆131 · Updated 2 years ago
- JMP is a Mixed Precision library for JAX. ☆203 · Updated 4 months ago
- ☆208 · Updated 2 years ago
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆204 · Updated last year
- Official code repository of the paper Linear Transformers Are Secretly Fast Weight Programmers. ☆105 · Updated 4 years ago
- GPU tester that detects broken and slow GPUs in a cluster ☆70 · Updated 2 years ago
- Pretrained deep learning models for Jax/Flax: StyleGAN2, GPT2, VGG, ResNet, etc. ☆254 · Updated 3 months ago
- ☆68 · Updated last year
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆187 · Updated 3 years ago
- Implementation of Feedback Transformer in Pytorch ☆107 · Updated 4 years ago
- A GPT, made only of MLPs, in Jax ☆58 · Updated 4 years ago
- Memory Efficient Attention (O(sqrt(n))) for Jax and PyTorch ☆184 · Updated 2 years ago (see the chunked-attention sketch after this list)
- Automatically take good care of your preemptible TPUs ☆36 · Updated 2 years ago
- ☆228 · Updated 4 months ago
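For context on the memory-efficient attention entry above: the sub-quadratic-memory trick (in the Rabe & Staats style) processes queries and keys in chunks and accumulates a numerically stable softmax with a running log-sum-exp, so the full seq × seq score matrix is never materialised. The PyTorch sketch below illustrates that idea under those assumptions; it is not the listed library's API, and the function name and chunk sizes are arbitrary.

```python
import torch


def chunked_attention(q, k, v, q_chunk: int = 128, kv_chunk: int = 128):
    """Single-head attention on (seq, dim) inputs without forming the full score matrix."""
    scale = q.shape[-1] ** -0.5
    out_chunks = []
    for qs in range(0, q.shape[0], q_chunk):
        q_blk = q[qs:qs + q_chunk] * scale
        # Running weighted-value sum and running log-sum-exp for this query chunk.
        acc = torch.zeros(q_blk.shape[0], v.shape[-1], dtype=q.dtype, device=q.device)
        lse = torch.full((q_blk.shape[0], 1), float("-inf"), dtype=q.dtype, device=q.device)
        for ks in range(0, k.shape[0], kv_chunk):
            scores = q_blk @ k[ks:ks + kv_chunk].T          # only (cq, ck) in memory
            blk_max = scores.amax(dim=-1, keepdim=True)
            exp_scores = torch.exp(scores - blk_max)
            blk_sum = exp_scores.sum(dim=-1, keepdim=True)
            blk_lse = blk_max + blk_sum.log()
            new_lse = torch.logaddexp(lse, blk_lse)
            # Rescale what we have so far and fold in this key/value block.
            acc = acc * torch.exp(lse - new_lse) \
                + torch.exp(blk_lse - new_lse) * ((exp_scores / blk_sum) @ v[ks:ks + kv_chunk])
            lse = new_lse
        out_chunks.append(acc)
    return torch.cat(out_chunks, dim=0)


# Sanity check against the naive quadratic-memory implementation.
q, k, v = (torch.randn(512, 64) for _ in range(3))
ref = torch.softmax((q @ k.T) * q.shape[-1] ** -0.5, dim=-1) @ v
assert torch.allclose(chunked_attention(q, k, v), ref, atol=1e-4)
```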