srush / anynp
Proof-of-concept of global switching between numpy/jax/pytorch in a library.
☆18 · Updated last year
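For context, a minimal sketch of what such global switching can look like (the names `set_backend` and `_active_module` are hypothetical, not anynp's actual API): the library keeps one module-level backend name, and every function fetches the active array module through a helper instead of importing numpy directly.

```python
# Minimal sketch of global numpy/jax/torch switching.
# All names here (set_backend, _active_module) are hypothetical,
# not anynp's actual API.
import importlib

_BACKEND = "numpy"  # module-level state shared by the whole library


def set_backend(name: str) -> None:
    """Globally switch the array backend: 'numpy', 'jax', or 'torch'."""
    global _BACKEND
    if name not in ("numpy", "jax", "torch"):
        raise ValueError(f"unknown backend: {name}")
    _BACKEND = name


def _active_module():
    """Return the active array module; library code calls this
    instead of importing numpy directly."""
    if _BACKEND == "jax":
        return importlib.import_module("jax.numpy")
    if _BACKEND == "torch":
        return importlib.import_module("torch")
    return importlib.import_module("numpy")


def softmax(x):
    """A library function written once, usable under any backend."""
    xp = _active_module()
    e = xp.exp(x - x.max())
    return e / e.sum()
```

Calling `set_backend("jax")` then flips every subsequent `softmax` call to `jax.numpy` without touching call sites, which is the kind of one-call global switch the proof of concept demonstrates.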
Alternatives and similar repositories for anynp
Users interested in anynp are comparing it to the libraries listed below.
- ☆21 · Updated last year
- JAX implementation of the Mistral 7b v0.2 model ☆35 · Updated last year
- nanoGPT-like codebase for LLM training ☆102 · Updated 2 months ago
- Experiment of using Tangent to autodiff triton ☆80 · Updated last year
- Multi-framework implementation of Deep Kernel Shaping and Tailored Activation Transformations, which are methods that modify neural network… ☆71 · Updated last month
- 🧱 Modula software package ☆216 · Updated last week
- Maximal Update Parametrization (μP) with Flax & Optax ☆16 · Updated last year
- gzip Predicts Data-dependent Scaling Laws ☆35 · Updated last year
- A simple library for scaling up JAX programs ☆140 · Updated 9 months ago
- LoRA for arbitrary JAX models and functions ☆140 · Updated last year
- Unofficial but Efficient Implementation of "Mamba: Linear-Time Sequence Modeling with Selective State Spaces" in JAX ☆85 · Updated last year
- ☆37 · Updated last year
- Official repository for the paper "Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks" ☆59 · Updated 3 years ago
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated 2 weeks ago
- microjax: a JAX-like function transformation engine, but micro ☆33 · Updated 9 months ago
- ☆83 · Updated last year
- Neural Networks for JAX ☆84 · Updated 10 months ago
- Automatically take good care of your preemptible TPUs ☆36 · Updated 2 years ago
- A port of the Mistral-7B model in JAX ☆32 · Updated last year
- Running Jax in PyTorch Lightning ☆109 · Updated 7 months ago
- PyTorch implementation of preconditioned stochastic gradient descent (Kron and affine preconditioner, low-rank approximation preconditioner… ☆179 · Updated last week
- Code accompanying our paper "Feature Learning in Infinite-Width Neural Networks" (https://arxiv.org/abs/2011.14522) ☆62 · Updated 4 years ago
- ☆141 · Updated last week
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆86 · Updated last year
- Meta-learning inductive biases in the form of useful conserved quantities ☆37 · Updated 2 years ago
- ☆60 · Updated 3 years ago
- Graph neural networks in JAX ☆67 · Updated last year
- Some common Huggingface transformers in maximal update parametrization (µP) ☆82 · Updated 3 years ago
- A place to store reusable transformer components of my own creation or found on the interwebs ☆59 · Updated last week
- Einsum-like high-level array sharding API for JAX ☆35 · Updated last year