ludocomito / learning-diffusion
A practical guide to diffusion models, implemented from scratch.
☆164 · Updated this week
Alternatives and similar repositories for learning-diffusion
Users interested in learning-diffusion are comparing it to the libraries listed below.
- ☆211 · Updated last year
- ☆42 · Updated 11 months ago
- ☆46 · Updated 8 months ago
- ☆532 · Updated 4 months ago
- Getting crystal-like representations with harmonic loss ☆192 · Updated 8 months ago
- This repository contains a simple Llama 3 implementation in pure JAX. ☆70 · Updated 9 months ago
- An implementation of PSGD Kron second-order optimizer for PyTorch ☆97 · Updated 4 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆108 · Updated 9 months ago
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources ☆148 · Updated 2 months ago
- ☆21 · Updated last year
- Dion optimizer algorithm ☆403 · Updated this week
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism. ☆105 · Updated 2 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆112 · Updated 2 months ago
- Minimal GPT (~350 lines with a simple task to test it) ☆63 · Updated 2 weeks ago
- NanoGPT-speedrunning for the poor T4 enjoyers ☆73 · Updated 7 months ago
- ☆152 · Updated last month
- Deep Learning, an Energy Approach ☆224 · Updated 6 months ago
- A package for defining deep learning models using categorical algebraic expressions. ☆61 · Updated last year
- Simple Transformer in Jax ☆139 · Updated last year
- DeMo: Decoupled Momentum Optimization ☆197 · Updated last year
- ☆128 · Updated 2 weeks ago
- 🧱 Modula software package ☆309 · Updated 3 months ago
- NUS CS5242 Neural Networks and Deep Learning, Xavier Bresson, 2025 ☆403 · Updated 7 months ago
- Large multi-modal models (L3M) pre-training. ☆222 · Updated 2 months ago
- Low memory full parameter finetuning of LLMs ☆54 · Updated 4 months ago
- The boundary of neural network trainability is fractal ☆221 · Updated last year
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆132 · Updated last year
- ☆213 · Updated this week
- ☆56 · Updated last year
- A really tiny autograd engine ☆96 · Updated 6 months ago