google-deepmind / alphatensor
☆2,736 · Updated last year
Alternatives and similar repositories for alphatensor:
Users interested in alphatensor are comparing it to the libraries listed below.
- Monte Carlo tree search in JAX (☆2,467 · Updated 2 weeks ago)
- JAX-based neural network library (☆3,018 · Updated this week)
- Flax is a neural network library for JAX that is designed for flexibility. (☆6,506 · Updated this week)
- Optax is a gradient processing and optimization library for JAX; a minimal usage sketch follows the list below. (☆1,875 · Updated this week)
- JAX - A curated list of resources https://github.com/google/jax (☆1,788 · Updated 2 months ago)
- Elegant easy-to-use neural networks + scientific computing in JAX. https://docs.kidger.site/equinox/ (☆2,324 · Updated this week)
- A Graph Neural Network Library in Jax (☆1,425 · Updated last year)
- maximal update parametrization (µP) (☆1,498 · Updated 9 months ago)
- functorch is JAX-like composable function transforms for PyTorch. (☆1,422 · Updated this week)
- Tensors, for human consumption (☆1,246 · Updated 5 months ago)
- AITemplate is a Python framework which renders neural network into high performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… (☆4,630 · Updated 3 weeks ago)
- ☆776 · Updated 2 weeks ago
- Cramming the training of a (BERT-type) language model into limited compute. (☆1,331 · Updated 10 months ago)
- Implementation of Hinton's forward-forward (FF) algorithm - an alternative to back-propagation (☆1,478 · Updated last year)
- Reference implementation of the Transformer architecture optimized for Apple Neural Engine (ANE) (☆2,614 · Updated 2 years ago)
- PyTorch extensions for high performance and large scale training. (☆3,306 · Updated 2 weeks ago)
- Python-based research interface for blackbox and hyperparameter optimization, based on the internal Google Vizier Service. (☆1,553 · Updated last week)
- Structured state space sequence models (☆2,611 · Updated 9 months ago)
- Training and serving large-scale neural networks with auto parallelization. (☆3,129 · Updated last year)
- Library for reading and writing large multi-dimensional arrays. (☆1,401 · Updated this week)
- Train to 94% on CIFAR-10 in <6.3 seconds on a single A100. Or ~95.79% in ~110 seconds (or less!) (☆1,252 · Updated 4 months ago)
- ☆1,432 · Updated 2 years ago
- Python 3.8+ toolbox for submitting jobs to Slurm (☆1,415 · Updated this week)
- Hardware accelerated, batchable and differentiable optimizers in JAX. (☆960 · Updated last week)
- A JAX research toolkit for building, editing, and visualizing neural networks. (☆1,759 · Updated this week)
- ☆709 · Updated last year
- Advanced evolutionary computation library built directly on top of PyTorch, created at NNAISENSE. (☆1,054 · Updated 2 weeks ago)
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… (☆1,564 · Updated last year)
- Type annotations and runtime checking for shape and dtype of JAX/NumPy/PyTorch/etc. arrays. https://docs.kidger.site/jaxtyping/ (shape-annotation sketch after the list) (☆1,385 · Updated this week)
- A machine learning compiler for GPUs, CPUs, and ML accelerators (☆3,112 · Updated this week)
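
Since several of the entries above are JAX ecosystem libraries, here is a minimal sketch of the usual Optax training-step pattern (initialize an optimizer, transform the gradients, apply the updates). The toy linear model, loss, and shapes are illustrative assumptions, not taken from any of the listed repositories.

```python
# Minimal Optax usage sketch (toy example, assumed for illustration).
import jax
import jax.numpy as jnp
import optax


def loss_fn(params, x, y):
    # Simple linear model with a mean-squared-error loss.
    pred = x @ params
    return jnp.mean((pred - y) ** 2)


optimizer = optax.adam(learning_rate=1e-3)
params = jnp.zeros((3,))
opt_state = optimizer.init(params)


@jax.jit
def train_step(params, opt_state, x, y):
    # Compute loss and gradients, let Optax transform the gradients,
    # then apply the resulting updates to the parameters.
    loss, grads = jax.value_and_grad(loss_fn)(params, x, y)
    updates, opt_state = optimizer.update(grads, opt_state, params)
    params = optax.apply_updates(params, updates)
    return params, opt_state, loss


key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (32, 3))
y = x @ jnp.array([1.0, -2.0, 0.5])
params, opt_state, loss = train_step(params, opt_state, x, y)
```

The same `init`/`update`/`apply_updates` pattern works with any Optax optimizer or chained gradient transformation, which is why it pairs naturally with the Flax, Haiku, and Equinox libraries listed above.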
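
The jaxtyping entry is likewise easiest to grasp from a tiny annotation example. The function name and shapes below are made up for illustration; enforcing the annotations at runtime additionally requires pairing them with a type checker such as beartype.

```python
# Minimal jaxtyping annotation sketch (hypothetical function, for illustration only).
import jax.numpy as jnp
from jaxtyping import Array, Float


def matvec(m: Float[Array, "rows cols"], v: Float[Array, "cols"]) -> Float[Array, "rows"]:
    # The annotations document (and, with a runtime checker enabled, enforce)
    # that m and v agree along the "cols" axis and that the result has shape (rows,).
    return m @ v


out = matvec(jnp.ones((4, 3)), jnp.ones((3,)))  # shape (4,)
```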