knsiczarnamagia / kaggle-tutorial
Kick-off repository for starting with Kaggle!
☆12 · Updated 6 months ago
Alternatives and similar repositories for kaggle-tutorial
Users who are interested in kaggle-tutorial are comparing it to the libraries listed below.
- Scikit-learn compatible library for molecular fingerprints ☆235 · Updated 2 weeks ago
- Efficient optimizers ☆226 · Updated this week
- Supporting PyTorch FSDP for optimizers ☆82 · Updated 6 months ago
- A template for starting reproducible Python machine-learning projects with hardware acceleration. Find an example at https://github.com/C… ☆102 · Updated 3 weeks ago
- Train VAE like a boss ☆281 · Updated 8 months ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds ☆252 · Updated 3 months ago
- Implementation of Diffusion Transformer (DiT) in JAX ☆278 · Updated last year
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆345 · Updated 11 months ago
- course.fast.ai 2022 part 2 ☆499 · Updated last year
- My take on Flow Matching (see the flow-matching sketch after this list) ☆64 · Updated 5 months ago
- A simple implementation of Bayesian Flow Networks (BFN) ☆240 · Updated last year
- 🧱 Modula software package ☆200 · Updated 3 months ago
- Focused on fast experimentation and simplicity ☆75 · Updated 6 months ago
- Minimal implementation of scalable rectified flow transformers, based on SD3's approach ☆592 · Updated 11 months ago
- The AdEMAMix Optimizer: Better, Faster, Older (see the AdEMAMix sketch after this list) ☆183 · Updated 9 months ago
- Puzzles for exploring transformers ☆350 · Updated 2 years ago
- Text to Image Latent Diffusion using a Transformer core ☆191 · Updated 10 months ago
- Getting started with diffusion ☆658 · Updated last year
- WIP ☆93 · Updated 10 months ago
- Implementation of memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" (see the chunked-attention sketch after this list) ☆379 · Updated last year
- Official implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate" (see the ADOPT sketch after this list) ☆426 · Updated 6 months ago
- When it comes to optimizers, it's always better to be safe than sorry ☆244 · Updated 2 months ago
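
Two of the entries above (the flow-matching take and the rectified flow transformers) center on the same training objective. Below is a minimal sketch of that objective in PyTorch; `flow_matching_loss` and the `model(x_t, t)` velocity-prediction API are illustrative assumptions, not code from either repository:

```python
import torch

def flow_matching_loss(model, x1):
    """Conditional flow-matching / rectified-flow loss (illustrative sketch).

    x1: a batch of data samples. We draw noise x0 ~ N(0, I) and a random
    time t, form the straight-line interpolant x_t = (1 - t) * x0 + t * x1,
    and regress the model onto the constant velocity x1 - x0 of that path.
    """
    x0 = torch.randn_like(x1)                                   # noise endpoint
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)),
                   device=x1.device)                            # one t per sample
    xt = (1 - t) * x0 + t * x1                                  # point on the path
    v_target = x1 - x0                                          # path velocity
    v_pred = model(xt, t.flatten())                             # assumed model API
    return (v_pred - v_target).pow(2).mean()
```

Sampling then integrates dx/dt = model(x, t) from t = 0 to t = 1, for instance with a plain Euler loop over a few dozen steps.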
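The AdEMAMix entry describes an Adam variant that mixes a fast and a slow exponential moving average of the gradient. A single-tensor sketch of the update as described in the paper follows; the paper's α/β3 warm-up schedulers are omitted, and `ademamix_step` is a hypothetical helper, not the official implementation:

```python
import torch

@torch.no_grad()
def ademamix_step(p, grad, state, lr=1e-3, betas=(0.9, 0.999, 0.9999),
                  alpha=5.0, eps=1e-8, weight_decay=0.0):
    """One AdEMAMix update for a single tensor (warm-up schedulers omitted).

    Keeps Adam's fast EMA m1 and second moment v, plus a slow gradient EMA m2
    with beta3 close to 1. The step direction mixes m1_hat + alpha * m2.
    """
    b1, b2, b3 = betas
    if not state:
        state.update(step=0,
                     m1=torch.zeros_like(p),   # fast gradient EMA (Adam's m)
                     m2=torch.zeros_like(p),   # slow gradient EMA
                     v=torch.zeros_like(p))    # second-moment EMA
    state["step"] += 1
    t = state["step"]
    state["m1"].mul_(b1).add_(grad, alpha=1 - b1)
    state["m2"].mul_(b3).add_(grad, alpha=1 - b3)
    state["v"].mul_(b2).addcmul_(grad, grad, value=1 - b2)
    m1_hat = state["m1"] / (1 - b1 ** t)       # bias-corrected fast EMA
    v_hat = state["v"] / (1 - b2 ** t)         # bias-corrected second moment
    update = (m1_hat + alpha * state["m2"]) / (v_hat.sqrt() + eps)
    if weight_decay:
        update = update + weight_decay * p     # decoupled weight decay
    p.add_(update, alpha=-lr)
```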
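The memory-efficient attention entry implements the idea from "Self-attention Does Not Need O(n²) Memory": attention can be computed with a streaming softmax instead of materializing the full n×n score matrix. A self-contained sketch of that idea, chunking over keys with a numerically stable running max; `chunked_attention` is an illustrative name, and masking, dropout, and multi-head reshaping are skipped:

```python
import torch

def chunked_attention(q, k, v, chunk_size=1024):
    """Attention over key/value chunks with an online softmax.

    q, k, v: (batch, n, d). The running accumulators acc, row_max, and
    row_sum are rescaled whenever a chunk raises the row-wise max, so the
    result matches softmax(q k^T / sqrt(d)) v without the full score matrix.
    """
    scale = q.shape[-1] ** -0.5
    acc = torch.zeros_like(q)                             # running sum of p @ v
    row_max = q.new_full(q.shape[:-1] + (1,), -float("inf"))
    row_sum = torch.zeros_like(row_max)
    for i in range(0, k.shape[1], chunk_size):
        kc, vc = k[:, i:i + chunk_size], v[:, i:i + chunk_size]
        s = q @ kc.transpose(-2, -1) * scale              # (batch, n, chunk)
        new_max = torch.maximum(row_max, s.amax(-1, keepdim=True))
        correction = (row_max - new_max).exp()            # rescale old accumulators
        p = (s - new_max).exp()
        acc = acc * correction + p @ vc
        row_sum = row_sum * correction + p.sum(-1, keepdim=True)
        row_max = new_max
    return acc / row_sum
```

The paper additionally chunks over queries (and its checkpointed formulation is what brings memory to O(√n) or O(1) per query); this sketch shows only the key-side streaming that avoids the quadratic score matrix.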
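The ADOPT entry's title states the paper's result; the mechanism behind it is small: normalize the gradient by the *previous* step's second-moment estimate before it enters the momentum average, decorrelating the two terms that break Adam's convergence for some β2. A hedged single-tensor sketch (the released algorithm also clips the normalized gradient, which is omitted here, and `adopt_step` is a hypothetical helper):

```python
import torch

@torch.no_grad()
def adopt_step(p, grad, state, lr=1e-3, betas=(0.9, 0.9999), eps=1e-6):
    """One ADOPT update for a single tensor (gradient clipping omitted).

    Unlike Adam, the gradient is normalized by v from the previous step
    before the momentum average, and v is refreshed only after the step.
    """
    b1, b2 = betas
    if not state:
        # initialize v from the first gradient; take no parameter step yet
        state.update(m=torch.zeros_like(p), v=grad * grad)
        return
    normed = grad / torch.clamp(state["v"].sqrt(), min=eps)   # uses v_{t-1}
    state["m"].mul_(b1).add_(normed, alpha=1 - b1)
    p.add_(state["m"], alpha=-lr)
    state["v"].mul_(b2).addcmul_(grad, grad, value=1 - b2)    # update v last
```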