jxbz / agd
Automatic gradient descent
☆215 · Updated 2 years ago
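agd implements the optimizer from "Automatic Gradient Descent: Deep Learning without Hyperparameters" (Bernstein et al., 2023), which trains deep networks without a tuned learning rate. As a rough illustration of the idea, here is a simplified PyTorch sketch: each layer takes a normalized step scaled by its own weight norm, and the global step size is derived from a gradient summary. This is our reading, not the repo's exact algorithm (the paper's update adds dimension-dependent scaling factors, and `agd_style_step` is an invented name):

```python
import torch

def agd_style_step(params, depth):
    """Simplified AGD-style update: layerwise normalized gradient descent
    with an automatically chosen global step size. A sketch only -- the
    actual algorithm in jxbz/agd includes dimension-dependent scaling
    factors omitted here. Assumes every parameter already has a .grad."""
    grads = [p.grad for p in params]
    # Gradient summary: one plausible aggregate of layerwise gradient scale
    # (the paper defines its own summary G).
    G = sum(g.norm() * p.norm() for p, g in zip(params, grads)) / depth
    # Automatic step size, no tuned learning rate: eta = log((1 + sqrt(1 + 4G)) / 2).
    eta = torch.log(0.5 * (1.0 + torch.sqrt(1.0 + 4.0 * G)))
    with torch.no_grad():
        for p, g in zip(params, grads):
            # Each layer moves a distance proportional to its own weight norm.
            p -= (eta / depth) * p.norm() * g / g.norm().clamp(min=1e-12)
```

After `loss.backward()`, one would call `agd_style_step(list(model.parameters()), depth=<number of layers>)`; because the step size is computed from the gradients themselves, there is no learning-rate schedule to tune.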
Alternatives and similar repositories for agd
Users interested in agd are comparing it to the libraries listed below.
- A JAX-based library for building transformers, including implementations of GPT, Gemma, LLaMA, Mixtral, Whisper, Swin, ViT and more. ☆295 · Updated last year
- ☆115 · Updated last month
- Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable. ☆173 · Updated 2 years ago
- ☆247 · Updated 4 months ago
- LoRA for arbitrary JAX models and functions. ☆141 · Updated last year
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in JAX (Equinox framework). ☆188 · Updated 3 years ago
- Neural Networks for JAX. ☆84 · Updated last year
- Unofficial JAX implementations of deep learning research papers. ☆158 · Updated 3 years ago
- ☆234 · Updated 8 months ago
- PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆188 · Updated 2 weeks ago
- ☆310 · Updated last year
- JAX Synergistic Memory Inspector. ☆179 · Updated last year
- Running JAX in PyTorch Lightning. ☆113 · Updated 10 months ago
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆400 · Updated last week
- git extension for {collaborative, communal, continual} model development. ☆215 · Updated 11 months ago
- ☆283 · Updated last year
- Named tensors with first-class dimensions for PyTorch. ☆331 · Updated 2 years ago
- Train very large language models in JAX. ☆209 · Updated 2 years ago
- A Python package of computer vision models for the Equinox ecosystem. ☆109 · Updated last year
- JAX implementation of the Llama 2 model. ☆216 · Updated last year
- JMP is a Mixed Precision library for JAX. ☆209 · Updated 9 months ago
- A simple library for scaling up JAX programs. ☆144 · Updated 11 months ago
- A functional training-loop library for JAX. ☆88 · Updated last year
- Swarm training framework using Haiku + JAX + Ray for layer-parallel transformer language models on unreliable, heterogeneous nodes. ☆242 · Updated 2 years ago
- Hierarchical Associative Memory User Experience. ☆104 · Updated 3 months ago
- Implementation of Flash Attention in JAX. ☆219 · Updated last year
- Named Tensors for Legible Deep Learning in JAX. ☆211 · Updated last week
- ☆144 · Updated 2 years ago
- An interactive exploration of Transformer programming. ☆269 · Updated last year
- Code implementing "Efficient Parallelization of a Ubiquitous Sequential Computation" (Heinsen, 2023); a sketch of the underlying log-space scan follows this list. ☆95 · Updated 10 months ago
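The Heinsen (2023) paper shows that the first-order linear recurrence x_t = a_t * x_{t-1} + b_t can be evaluated in parallel using cumulative sums in log space. A minimal PyTorch sketch for the positive-coefficients case, assuming x_0 = 0 (the paper extends this to arbitrary signs via complex logarithms; the function name here is ours):

```python
import torch

def parallel_linear_recurrence(log_a, log_b):
    """Evaluate x_t = a_t * x_{t-1} + b_t with x_0 = 0, in parallel,
    for positive a_t and b_t supplied in log space.
    Identity: x_t = exp(A_t + logsumexp_{s<=t}(log b_s - A_s)),
    where A_t = sum_{r<=t} log a_r."""
    a_star = torch.cumsum(log_a, dim=-1)
    log_x = a_star + torch.logcumsumexp(log_b - a_star, dim=-1)
    return torch.exp(log_x)

# Sanity check against the sequential recurrence.
a, b = torch.rand(8) + 0.5, torch.rand(8) + 0.5
x_parallel = parallel_linear_recurrence(a.log(), b.log())
x, xs = torch.tensor(0.0), []
for t in range(8):
    x = a[t] * x + b[t]
    xs.append(x)
assert torch.allclose(x_parallel, torch.stack(xs), atol=1e-5)
```

Both cumulative operations are associative scans, so the recurrence runs in logarithmic parallel time rather than T sequential steps.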