google / flaxformer
☆364 · Updated last year
Alternatives and similar repositories for flaxformer
Users interested in flaxformer are comparing it to the libraries listed below.
- ☆191 · Updated last week
- Implementation of Flash Attention in Jax ☆222 · Updated last year
- JAX Synergistic Memory Inspector ☆183 · Updated last year
- Train very large language models in Jax. ☆210 · Updated 2 years ago
- JAX implementation of the Llama 2 model ☆215 · Updated last year
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆189 · Updated 3 years ago
- Inference code for LLaMA models in JAX ☆120 · Updated last year
- ☆259 · Updated 6 months ago
- Task-based datasets, preprocessing, and evaluation for sequence models. ☆593 · Updated last month
- Implementation of a Transformer, but completely in Triton ☆277 · Updated 3 years ago
- ☆66 · Updated 3 years ago
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ☆543 · Updated last week
- ☆167 · Updated 2 years ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆687 · Updated this week
- Sequence modeling with Mega. ☆302 · Updated 2 years ago
- Language Modeling with the H3 State Space Model ☆521 · Updated 2 years ago
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training! ☆113 · Updated 2 years ago
- Named tensors with first-class dimensions for PyTorch ☆332 · Updated 2 years ago
- jax-triton contains integrations between JAX and OpenAI Triton ☆436 · Updated 3 weeks ago
- ☆63 · Updated 3 years ago
- JMP is a Mixed Precision library for JAX. ☆210 · Updated 11 months ago
- Amos optimizer with JEstimator lib. ☆82 · Updated last year
- LoRA for arbitrary JAX models and functions ☆143 · Updated last year
- Neural Networks and the Chomsky Hierarchy ☆212 · Updated last year
- Everything you want to know about Google Cloud TPU ☆555 · Updated last year
- Implementation of https://srush.github.io/annotated-s4 ☆510 · Updated 6 months ago
- ☆287 · Updated last year
- Swarm training framework using Haiku + JAX + Ray for layer parallel transformer language models on unreliable, heterogeneous nodes ☆242 · Updated 2 years ago
- ☆314 · Updated last year
- some common Huggingface transformers in maximal update parametrization (µP) ☆87 · Updated 3 years ago