srush / anynp
Proof-of-concept of global switching between numpy/jax/pytorch in a library.
☆18 · Updated last year
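The global-switching idea can be sketched as a single module-level handle that library code dispatches through, swapped at runtime. This is a hypothetical minimal version of the pattern, not anynp's actual API — `set_backend`, `_BACKEND`, and `arange` here are illustrative names:

```python
import importlib

# Module-level handle to the active array library; numpy by default.
_BACKEND = importlib.import_module("numpy")

def set_backend(name: str) -> None:
    """Globally swap the array backend, e.g. 'numpy', 'jax.numpy', or 'torch'."""
    global _BACKEND
    _BACKEND = importlib.import_module(name)

def arange(n):
    # Library code calls through the indirection instead of importing
    # numpy directly, so the backend can be changed at runtime.
    return _BACKEND.arange(n)

set_backend("numpy")
x = arange(4)  # with the numpy backend, this is a numpy.ndarray
```

Because numpy, `jax.numpy`, and `torch` share much of the same function-level API surface, a thin dispatch layer like this is often enough for a proof of concept.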
Alternatives and similar repositories for anynp
Users interested in anynp are comparing it to the libraries listed below.
- ☆21 · Updated last year
- JAX implementation of the Mistral 7b v0.2 model ☆35 · Updated last year
- Experiment of using Tangent to autodiff triton ☆80 · Updated last year
- nanoGPT-like codebase for LLM training ☆110 · Updated 2 weeks ago
- Pytorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation precondition… ☆188 · Updated last month
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆63 · Updated 4 years ago
- 🧱 Modula software package ☆303 · Updated 3 months ago
- Running Jax in PyTorch Lightning ☆114 · Updated 11 months ago
- gzip Predicts Data-dependent Scaling Laws ☆34 · Updated last year
- ☆60 · Updated 3 years ago
- ☆61 · Updated last year
- Official repository for the paper "Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks" ☆59 · Updated 3 years ago
- A port of the Mistral-7B model in JAX ☆32 · Updated last year
- JAX-like function transformation engine, but micro: microjax ☆33 · Updated last year
- Maximal Update Parametrization (μP) with Flax & Optax ☆16 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆62 · Updated last month
- Meta-learning inductive biases in the form of useful conserved quantities ☆38 · Updated 3 years ago
- ☆91 · Updated last year
- ☆15 · Updated last month
- Named Tensors for Legible Deep Learning in JAX ☆211 · Updated 2 weeks ago
- A simple library for scaling up JAX programs ☆144 · Updated 2 weeks ago
- Some common Huggingface transformers in maximal update parametrization (µP) ☆86 · Updated 3 years ago
- Multi-framework implementation of Deep Kernel Shaping and Tailored Activation Transformations, which are methods that modify neural netwo… ☆74 · Updated 4 months ago
- Unofficial but efficient implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆89 · Updated last year
- A functional training loops library for JAX ☆88 · Updated last year
- Neural Networks for JAX ☆84 · Updated last year
- Functional local implementations of main model parallelism approaches ☆96 · Updated 2 years ago
- ☆166 · Updated 2 years ago
- ☆38 · Updated last year
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆87 · Updated last year