itsdaniele / jeometric
Graph neural networks in JAX.
☆67 · Updated last year
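jeometric's own API is not documented on this page, so as a rough illustration of the domain only, here is a minimal, generic message-passing (graph-convolution) layer written in plain JAX. Every name below (`gcn_layer`, `senders`, `receivers`, the `params` layout) is hypothetical and is not jeometric's actual interface.

```python
# A minimal, generic message-passing layer in plain JAX.
# NOTE: illustrative sketch only -- not jeometric's API; all names are hypothetical.
import jax
import jax.numpy as jnp

def gcn_layer(params, node_feats, senders, receivers):
    """One graph-convolution step: sum neighbor features into each node,
    then apply a learned linear transform and a nonlinearity.

    node_feats: [num_nodes, in_dim] node feature matrix
    senders/receivers: [num_edges] int arrays defining directed edges
    """
    num_nodes = node_feats.shape[0]
    # Gather each edge's source-node features and sum them at the target node.
    messages = node_feats[senders]
    aggregated = jax.ops.segment_sum(messages, receivers, num_segments=num_nodes)
    # Learned transform of the aggregated neighborhood.
    return jax.nn.relu(aggregated @ params["w"] + params["b"])

# Example: a 4-node directed cycle with 8-dim features projected to 16 dims.
key = jax.random.PRNGKey(0)
params = {
    "w": jax.random.normal(key, (8, 16)) * 0.1,
    "b": jnp.zeros(16),
}
node_feats = jax.random.normal(key, (4, 8))
senders = jnp.array([0, 1, 2, 3])
receivers = jnp.array([1, 2, 3, 0])
out = gcn_layer(params, node_feats, senders, receivers)
print(out.shape)  # (4, 16)
```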
Alternatives and similar repositories for jeometric
Users interested in jeometric are comparing it to the libraries listed below:
- A functional training-loop library for JAX ☆88 · Updated last year
- Neural Networks for JAX ☆84 · Updated 11 months ago
- ☆115 · Updated this week
- Running JAX in PyTorch Lightning ☆111 · Updated 8 months ago
- Equivariant Steerable CNNs Library for PyTorch https://quva-lab.github.io/escnn/ ☆30 · Updated 2 years ago
- PyTorch-like dataloaders for JAX. ☆94 · Updated 2 months ago
- Run PyTorch in JAX. 🤝 ☆277 · Updated last week
- Bare-bones implementations of some generative models in JAX: diffusion, normalizing flows, consistency models, flow matching, (beta)-VAEs… ☆133 · Updated last year
- Lightning-like training API for JAX with Flax ☆42 · Updated 8 months ago
- JAX Arrays for human consumption ☆105 · Updated last month
- A Python package of computer vision models for the Equinox ecosystem. ☆108 · Updated last year
- Flow-matching algorithms in JAX ☆101 · Updated last year
- Use JAX functions in PyTorch ☆249 · Updated 2 years ago
- Diffusion models in PyTorch ☆107 · Updated 2 months ago
- A simple library for scaling up JAX programs ☆143 · Updated 9 months ago
- ☆43 · Updated 2 weeks ago
- Because we don't have enough time to read everything ☆89 · Updated 11 months ago
- Erwin: A Tree-based Hierarchical Transformer for Large-scale Physical Systems [ICML'25] ☆100 · Updated 2 months ago
- ☆125 · Updated 8 months ago
- Wraps PyTorch code in a JIT-compatible way for JAX. Supports automatically defining gradients for reverse-mode autodiff. ☆57 · Updated 3 weeks ago
- A collection of graph neural network implementations in JAX ☆33 · Updated last year
- ☆60 · Updated 3 years ago
- Einsum-like high-level array-sharding API for JAX ☆35 · Updated last year
- Code used by the "Clifford Group Equivariant Neural Networks" paper. ☆84 · Updated last year
- ☆209 · Updated last week
- Minimal, lightweight JAX implementations of popular models. ☆93 · Updated this week
- Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable. ☆172 · Updated 2 years ago
- Your favourite classical machine learning algos on the GPU/TPU ☆20 · Updated 7 months ago
- Scalable and Stable Parallelization of Nonlinear RNNs ☆19 · Updated 6 months ago
- Named Tensors for Legible Deep Learning in JAX ☆201 · Updated this week