jax-ml / jax-tpu-embedding
☆18 · Updated this week
Alternatives and similar repositories for jax-tpu-embedding
Users interested in jax-tpu-embedding are comparing it to the libraries listed below.
- A user-friendly toolchain that enables seamless execution of ONNX models using JAX as the backend. ☆114 · Updated this week
- Serialize JAX, Flax, Haiku, or Objax model params with 🤗 `safetensors` ☆45 · Updated last year
- Experimenting with how best to do multi-host dataloading ☆10 · Updated 2 years ago
- An experiment in using Tangent to autodiff Triton ☆79 · Updated last year
- A simple library for scaling up JAX programs ☆139 · Updated 7 months ago
- Machine Learning eXperiment Utilities ☆46 · Updated last year
- Minimal but scalable implementation of large language models in JAX ☆35 · Updated 7 months ago
- A PyTorch-centric eager-mode debugger ☆47 · Updated 6 months ago
- Proof-of-concept of global switching between numpy/jax/pytorch in a library. ☆18 · Updated last year
- Transformer with Mu-Parameterization, implemented in JAX/Flax. Supports FSDP on TPU pods. ☆30 · Updated 2 weeks ago
- Distributed pretraining of large language models (LLMs) on cloud TPU slices, with JAX and Equinox. ☆24 · Updated 8 months ago
- Personal solutions to the Triton Puzzles ☆19 · Updated 11 months ago
- Causal Analysis of Agent Behavior for AI Safety ☆18 · Updated last year
- JAX implementation of Black Forest Labs' Flux.1 family of models ☆34 · Updated 8 months ago
- A place to store reusable transformer components of my own creation or found on the interwebs ☆56 · Updated last week
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆46 · Updated this week
- Einsum-like high-level array sharding API for JAX ☆35 · Updated 11 months ago
- Train a SmolLM-style LLM on fineweb-edu in JAX/Flax with an assortment of optimizers ☆17 · Updated 3 months ago
- Multi-framework implementation of Deep Kernel Shaping and Tailored Activation Transformations, which are methods that modify neural networks… ☆70 · Updated 3 weeks ago
- Large-scale 4D-parallelism pre-training for 🤗 transformers in Mixture of Experts *(still a work in progress)* ☆84 · Updated last year
- JAX/Flax rewrite of Karpathy's nanoGPT ☆57 · Updated 2 years ago