Francesco215 / autoregressive_diffusion
Video diffusion model: autoregressive, long context, with efficient training and inference. Work in progress.
☆34 · Updated 5 months ago
Alternatives and similar repositories for autoregressive_diffusion
Users interested in autoregressive_diffusion are comparing it to the libraries listed below.
- Getting crystal-like representations with harmonic loss · ☆195 · Updated 10 months ago
- WIP · ☆93 · Updated last year
- [ICML 2025] Roll the dice & look before you leap: Going beyond the creative limits of next-token prediction · ☆84 · Updated 8 months ago
- Focused on fast experimentation and simplicity · ☆80 · Updated last year
- ☆33 · Updated last year
- Synthetic Alphabet Dataset · ☆19 · Updated 10 months ago
- Official implementation of Dynamic erf (Derf) · ☆127 · Updated last month
- $100K or 100 Days: Trade-offs when Pre-Training with Academic Resources · ☆150 · Updated 4 months ago
- An implementation of the PSGD Kron second-order optimizer for PyTorch · ☆98 · Updated 6 months ago
- RS-IMLE · ☆43 · Updated last year
- ☆111 · Updated 6 months ago
- Landing repository for the paper "Softpick: No Attention Sink, No Massive Activations with Rectified Softmax" · ☆86 · Updated 4 months ago
- Code and weights for the paper "Cluster and Predict Latent Patches for Improved Masked Image Modeling" · ☆130 · Updated this week
- JAX codebase for Evolutionary Strategies at the Hyperscale · ☆218 · Updated last month
- σ-GPT: A New Approach to Autoregressive Models · ☆70 · Updated last year
- The boundary of neural network trainability is fractal · ☆221 · Updated last year
- Implementation of Gradient Agreement Filtering (Chaubard et al., Stanford), adapted for single-machine microbatches, in PyTorch · ☆25 · Updated last year
- ☆167 · Updated 5 months ago
- Supporting code for the blog post on modular manifolds · ☆115 · Updated 4 months ago
- Code accompanying the paper "Generalized Interpolating Discrete Diffusion" · ☆113 · Updated 8 months ago
- 📄 Small Batch Size Training for Language Models · ☆80 · Updated 4 months ago
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training · ☆132 · Updated last year
- The official GitHub repo for "Diffusion Language Models are Super Data Learners" · ☆220 · Updated 3 months ago
- ☆58 · Updated last year
- ☆215 · Updated last month
- Don't just regulate gradients like in Muon, regulate the weights too · ☆31 · Updated 6 months ago
- Shaping capabilities with token-level pretraining data filtering · ☆75 · Updated last week
- DeMo: Decoupled Momentum Optimization · ☆198 · Updated last year
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" · ☆103 · Updated last year
- Explorations into whether a transformer with RL can direct a genetic algorithm to converge faster · ☆71 · Updated 8 months ago