jxbz / agd
Automatic gradient descent
☆208 · Updated last year
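As a rough orientation for readers new to the idea: automatic gradient descent sets its own step size from the network architecture instead of relying on a tuned learning rate. The snippet below is only a toy PyTorch sketch of that general idea; the norm-based scaling and the 1/depth factor are illustrative assumptions, not the package's actual API or update rule, so consult the repository for the real optimizer.

```python
import torch
import torch.nn as nn

# Toy illustration only: the agd package derives its per-layer step sizes from
# the architecture itself. Here we simply rescale each parameter's gradient by
# the ratio of parameter norm to gradient norm, split across the number of
# weight matrices, so no learning rate is hand-tuned. This is NOT the paper's
# exact update rule.
net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))

loss = loss_fn(net(x), y)
loss.backward()

depth = sum(1 for p in net.parameters() if p.dim() > 1)  # count weight matrices
with torch.no_grad():
    for p in net.parameters():
        if p.grad is None:
            continue
        # architecture-derived scale: parameter norm over gradient norm,
        # divided by depth (illustrative choice)
        scale = p.norm() / (p.grad.norm() + 1e-12) / depth
        p.add_(p.grad, alpha=-scale.item())
```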
Alternatives and similar repositories for agd
Users interested in agd are comparing it to the libraries listed below.
- Named tensors with first-class dimensions for PyTorch ☆331 · Updated 2 years ago
- ☆114 · Updated 3 weeks ago
- ☆270 · Updated 11 months ago
- ☆228 · Updated 4 months ago
- ☆246 · Updated last month
- A JAX-based library for building transformers; includes implementations of GPT, Gemma, LLaMA, Mixtral, Whisper, Swin, ViT and more. ☆288 · Updated 9 months ago
- Neural Networks for JAX ☆84 · Updated 9 months ago
- Unofficial JAX implementations of deep learning research papers ☆156 · Updated 3 years ago
- 🧱 Modula software package ☆200 · Updated 2 months ago
- A functional training loops library for JAX ☆88 · Updated last year
- Hierarchical Associative Memory User Experience ☆100 · Updated last year
- ☆60 · Updated 3 years ago
- ☆143 · Updated 2 years ago
- Implementation of Flash Attention in JAX ☆213 · Updated last year
- Swarm training framework using Haiku + JAX + Ray for layer-parallel transformer language models on unreliable, heterogeneous nodes ☆239 · Updated 2 years ago
- A pure-functional implementation of a machine learning transformer model in Python/JAX ☆179 · Updated last month
- PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆177 · Updated 2 weeks ago
- Graph neural networks in JAX ☆67 · Updated last year
- A simple library for scaling up JAX programs ☆139 · Updated 7 months ago
- JAX Synergistic Memory Inspector ☆174 · Updated 11 months ago
- Compositional Linear Algebra ☆477 · Updated 3 weeks ago
- For optimization algorithm research and development ☆521 · Updated this week
- LoRA for arbitrary JAX models and functions ☆138 · Updated last year
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆385 · Updated this week
- JAX implementation of the Llama 2 model ☆218 · Updated last year
- ☆303 · Updated last year
- JAX Arrays for human consumption ☆93 · Updated last week
- Effortless plug-and-play optimizer to cut model training costs by 50%; a new optimizer that is 2x faster than Adam on LLMs ☆381 · Updated last year
- ☆131 · Updated 2 years ago
- ☆157 · Updated last year