crypdick / timm-lr-scheduler-explorer
A dashboard for exploring timm learning rate schedulers
☆ 19 · Updated last year
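To illustrate the kind of curve such a dashboard visualizes, here is a minimal pure-Python sketch of cosine annealing with linear warmup, one of the schedule shapes timm provides. The function and parameter names below are illustrative, not timm's actual API.

```python
import math

def cosine_warmup_lr(step, total_steps, base_lr=1e-3, min_lr=1e-5, warmup_steps=100):
    """Learning rate at `step`: linear warmup, then cosine decay to min_lr.

    Illustrative sketch only; mirrors the shape of a timm-style cosine
    schedule with warmup, not the library's real interface.
    """
    if step < warmup_steps:
        # Linear warmup from near 0 up to base_lr
        return base_lr * (step + 1) / warmup_steps
    # Cosine decay from base_lr down to min_lr over the remaining steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * progress))

# Sample the full schedule, the way a dashboard would plot it
schedule = [cosine_warmup_lr(s, total_steps=1000) for s in range(1000)]
```

Plotting `schedule` against the step index reproduces the familiar warmup ramp followed by a cosine tail, which is the kind of picture the explorer renders interactively for timm's scheduler configurations.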
Alternatives and similar repositories for timm-lr-scheduler-explorer
Users interested in timm-lr-scheduler-explorer are comparing it to the libraries listed below.
- Load any CLIP model with a standardized interface ☆ 22 · Updated 2 months ago
- Implementation of a Light Recurrent Unit in Pytorch ☆ 49 · Updated last year
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆ 49 · Updated 3 years ago
- Implementation of TableFormer, Robust Transformer Modeling for Table-Text Encoding, in Pytorch ☆ 39 · Updated 3 years ago
- Experimental scripts for researching data-adaptive learning rate scheduling ☆ 22 · Updated 2 years ago
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆ 40 · Updated this week
- Implementation of a holodeck, written in Pytorch ☆ 18 · Updated 2 years ago
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆ 55 · Updated 8 months ago
- Pixel Parsing: a reproduction of OCR-free end-to-end document understanding models with open data ☆ 23 · Updated last year
- My explorations into editing the knowledge and memories of an attention network ☆ 35 · Updated 3 years ago
- An open-source implementation of CLIP ☆ 33 · Updated 3 years ago
- Utilities for Training Very Large Models ☆ 58 · Updated last year
- ResiDual: Transformer with Dual Residual Connections, https://arxiv.org/abs/2304.14802 ☆ 96 · Updated 2 years ago
- A scalable implementation of diffusion and flow-matching with XGBoost models, applied to calorimeter data ☆ 18 · Updated last year
- Utilities for PyTorch distributed ☆ 25 · Updated 9 months ago
- Local Attention - Flax module for Jax ☆ 22 · Updated 4 years ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆ 121 · Updated last year
- PyTorch implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" ☆ 24 · Updated this week
- ☆ 31 · Updated last month
- ☆ 33 · Updated 5 months ago
- Contrastive Language-Image Pretraining ☆ 38 · Updated last year
- ImageNet-12k subset of ImageNet-21k (fall11) ☆ 21 · Updated 2 years ago
- CUDA implementation of autoregressive linear attention, with all the latest research findings ☆ 46 · Updated 2 years ago
- Implementation of an Attention layer where each head can attend to more than just one token, using coordinate descent to pick the top-k ☆ 47 · Updated 2 years ago
- ☆ 24 · Updated last year
- ☆ 34 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆ 63 · Updated last week
- JAX implementation of ViT-VQGAN ☆ 82 · Updated 3 years ago
- Implementation of the Remixer block from the Remixer paper, in Pytorch ☆ 36 · Updated 4 years ago
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing ☆ 50 · Updated 3 years ago