riverstone496 / awesome-second-order-optimization
☆25 · Updated last year
Alternatives and similar repositories for awesome-second-order-optimization:
Users interested in awesome-second-order-optimization are comparing it to the libraries listed below.
- Flow-matching algorithms in JAX ☆87 · Updated 8 months ago
- Supporting PyTorch FSDP for optimizers ☆80 · Updated 4 months ago
- Implementation of PSGD optimizer in JAX ☆30 · Updated 3 months ago
- 🧱 Modula software package ☆187 · Updated 2 weeks ago
- WIP ☆93 · Updated 8 months ago
- [ICLR 2025] Official PyTorch Implementation of Gated Delta Networks: Improving Mamba2 with Delta Rule ☆148 · Updated 3 weeks ago
- An implementation of PSGD Kron second-order optimizer for PyTorch ☆86 · Updated 2 weeks ago
- ☆31 · Updated last year
- ☆53 · Updated last year
- Minimal but scalable implementation of large language models in JAX ☆34 · Updated 5 months ago
- ☆76 · Updated 9 months ago
- Stick-breaking attention ☆50 · Updated last month
- ☆173 · Updated 4 months ago
- ☆52 · Updated 6 months ago
- Experiment in using Tangent to autodiff Triton ☆78 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆109 · Updated 3 months ago
- A simple library for scaling up JAX programs ☆134 · Updated 5 months ago
- [ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate" ☆89 · Updated this week
- ☆49 · Updated last year
- ☆59 · Updated 9 months ago
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆53 · Updated last month
- ☆97 · Updated this week
- ☆59 · Updated 4 months ago
- ☆30 · Updated 4 months ago
- Explorations into the recently proposed Taylor Series Linear Attention ☆97 · Updated 7 months ago
- A basic pure-PyTorch implementation of Flash Attention ☆16 · Updated 5 months ago
- JAX bindings for Flash Attention v2 ☆89 · Updated 8 months ago
- ☆27 · Updated 9 months ago
- Implementation of Denoising Diffusion Probabilistic Models (DDPM) in JAX and Flax ☆20 · Updated last year
- Accelerated First Order Parallel Associative Scan ☆180 · Updated 7 months ago