cuEquivariance is a math library providing low-level primitives and tensor operations to accelerate widely used models built on equivariant neural networks, such as DiffDock, MACE, Allegro, and NequIP. It also includes kernels for accelerated structure prediction.
☆ 387 · Apr 16, 2026 · Updated this week
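The common thread of the repositories on this page is E(3)/SO(3) equivariance: an operation f is equivariant when rotating its inputs rotates its output the same way, f(Ra, Rb) = R f(a, b). As a minimal, library-free sketch (not cuEquivariance's API), the l=1 ⊗ l=1 → l=1 Clebsch-Gordan product of two vectors is, up to normalization, the cross product, and its equivariance can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(rng):
    # QR decomposition of a Gaussian matrix yields a random orthogonal matrix.
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))      # fix column signs for a well-defined Q
    if np.linalg.det(q) < 0:      # force det = +1 (a proper rotation)
        q[:, 0] *= -1
    return q

a, b = rng.normal(size=3), rng.normal(size=3)
R = random_rotation(rng)

# The cross product is the l=1 x l=1 -> l=1 part of the tensor product,
# so rotating the inputs must equal rotating the output.
lhs = R @ np.cross(a, b)          # rotate the output
rhs = np.cross(R @ a, R @ b)      # rotate the inputs
print("equivariant:", np.allclose(lhs, rhs))
```

Libraries such as e3nn and the kernel generators listed below implement this same pattern for arbitrary irreps and batch it efficiently on GPUs; the sketch above only illustrates the defining property.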
Alternatives and similar repositories for cuEquivariance
Users interested in cuEquivariance are comparing it to the libraries listed below.
- Compute neighbor lists for atomistic systems ☆ 75 · Updated this week
- JAX library for E(3)-equivariant neural networks ☆ 229 · Apr 1, 2026 · Updated 2 weeks ago
- 🌟 [NeurIPS '25 Spotlight] Fair and transparent benchmark of machine learning interatomic potentials (MLIPs), beyond basic error metrics … ☆ 98 · Updated this week
- CUDA implementations of MACE models ☆ 23 · Aug 19, 2025 · Updated 8 months ago
- Machine-Learned Interatomic Potential eXploration (mlipx) is designed at BASF for evaluating machine-learned interatomic potentials (MLIP… ☆ 102 · Jan 28, 2026 · Updated 2 months ago
- MACE - Fast and accurate machine learning interatomic potentials with higher-order equivariant message passing. ☆ 1,145 · Apr 6, 2026 · Updated 2 weeks ago
- OpenEquivariance: a fast, open-source GPU JIT kernel generator for the Clebsch-Gordan tensor product. ☆ 140 · Updated this week
- SevenNet - a graph neural network interatomic potential package supporting efficient multi-GPU parallel molecular dynamics simulations. ☆ 241 · Apr 14, 2026 · Updated last week
- JAX implementation of the NequIP neural network interatomic potential ☆ 17 · Feb 24, 2026 · Updated last month
- Pure C implementation of e3nn ☆ 24 · Mar 17, 2025 · Updated last year
- E3x is a JAX library for constructing efficient E(3)-equivariant deep learning architectures, built on top of Flax. ☆ 118 · Apr 7, 2025 · Updated last year
- Self-describing sparse tensor data format for atomistic machine learning and beyond
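Several entries above (the neighbor-list package, MACE, SevenNet, NequIP) consume neighbor lists: the set of atom pairs within a cutoff radius that defines the edges of the interaction graph. A brute-force sketch of the idea (an illustration only, not the API of any listed library; the function name `neighbor_list` is made up here):

```python
import numpy as np

def neighbor_list(positions, cutoff):
    """Return all pairs (i, j), i < j, closer than `cutoff` (no periodic boundaries)."""
    diff = positions[:, None, :] - positions[None, :, :]  # (N, N, 3) displacement tensor
    dist = np.linalg.norm(diff, axis=-1)                  # (N, N) pairwise distances
    i, j = np.where(np.triu(dist < cutoff, k=1))          # upper triangle skips self-pairs
    return list(zip(i.tolist(), j.tolist()))

positions = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [5.0, 0.0, 0.0]])
pairs = neighbor_list(positions, cutoff=2.0)
print(pairs)  # -> [(0, 1)]
```

This O(N²) version is only practical for small systems; production libraries use cell lists or k-d trees and handle periodic boundary conditions, which is why a dedicated neighbor-list package exists.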