google / flaxformer
☆361 · Updated last year
Alternatives and similar repositories for flaxformer
Users interested in flaxformer are comparing it to the libraries listed below
- ☆188 · Updated 2 weeks ago
- Implementation of Flash Attention in Jax · ☆216 · Updated last year
- JAX Synergistic Memory Inspector · ☆179 · Updated last year
- JAX implementation of the Llama 2 model · ☆219 · Updated last year
- Train very large language models in Jax. · ☆208 · Updated last year
- Implementation of the specific Transformer architecture from PaLM (Scaling Language Modeling with Pathways) in Jax (Equinox framework) · ☆188 · Updated 3 years ago
- ☆255 · Updated 3 months ago
- Task-based datasets, preprocessing, and evaluation for sequence models. · ☆586 · Updated 2 weeks ago
- Inference code for LLaMA models in JAX · ☆119 · Updated last year
- ☆67 · Updated 3 years ago
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… · ☆535 · Updated 2 weeks ago
- Implementation of a Transformer, but completely in Triton · ☆274 · Updated 3 years ago
- Language Modeling with the H3 State Space Model · ☆518 · Updated last year
- Sequence modeling with Mega. · ☆300 · Updated 2 years ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax · ☆658 · Updated this week
- ☆166 · Updated 2 years ago
- jax-triton contains integrations between JAX and OpenAI Triton · ☆419 · Updated 2 weeks ago
- Swarm training framework using Haiku + JAX + Ray for layer-parallel transformer language models on unreliable, heterogeneous nodes · ☆241 · Updated 2 years ago
- Implementation of https://srush.github.io/annotated-s4 · ☆502 · Updated 2 months ago
- Amos optimizer with JEstimator lib. · ☆82 · Updated last year
- ☆61 · Updated 3 years ago
- LoRA for arbitrary JAX models and functions (see the LoRA sketch after this list) · ☆142 · Updated last year
- JMP is a Mixed Precision library for JAX. · ☆208 · Updated 7 months ago
- Annotated version of the Mamba paper · ☆489 · Updated last year
- Named tensors with first-class dimensions for PyTorch · ☆331 · Updated 2 years ago
- ☆233 · Updated 7 months ago
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training · ☆113 · Updated 2 years ago
- Understand and test language model architectures on synthetic tasks. · ☆225 · Updated 2 months ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch · ☆229 · Updated last year
- Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" (see the sketch below) · ☆380 · Updated 2 years ago
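
The last entry implements the chunked attention algorithm from "Self-attention Does Not Need O(n²) Memory". A minimal single-head JAX sketch of the idea follows; it is my own illustration rather than that repository's code, and `chunk_size` (plus the assumption that it divides the key length) is an illustrative choice:

```python
import jax
import jax.numpy as jnp

def chunked_attention(q, k, v, chunk_size=128):
    """Softmax attention computed over key/value chunks.

    Carries a running (max, weighted sum, normalizer) triple so the
    full [n, m] score matrix is never materialized; peak memory is
    O(n * chunk_size) instead of O(n * m).
    Shapes: q [n, d], k [m, d], v [m, d_v]; assumes chunk_size divides m.
    """
    scale = q.shape[-1] ** -0.5
    num_chunks = k.shape[0] // chunk_size
    k = k.reshape(num_chunks, chunk_size, k.shape[-1])
    v = v.reshape(num_chunks, chunk_size, v.shape[-1])

    def step(carry, kv):
        run_max, weighted, norm = carry
        k_c, v_c = kv
        s = (q @ k_c.T) * scale                       # [n, chunk_size]
        new_max = jnp.maximum(run_max, s.max(-1, keepdims=True))
        corr = jnp.exp(run_max - new_max)             # rescale old terms
        p = jnp.exp(s - new_max)
        weighted = weighted * corr + p @ v_c
        norm = norm * corr + p.sum(-1, keepdims=True)
        return (new_max, weighted, norm), None

    n = q.shape[0]
    init = (jnp.full((n, 1), -jnp.inf),
            jnp.zeros((n, v.shape[-1])),
            jnp.zeros((n, 1)))
    (_, weighted, norm), _ = jax.lax.scan(step, init, (k, v))
    return weighted / norm
```

Flash Attention (second entry in the list) fuses the same streaming-softmax recurrence into a single kernel, trading HBM round-trips for on-chip recomputation.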
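
The LoRA entry above adapts arbitrary JAX models and functions. The core idea from the LoRA paper, sketched here from scratch rather than with that repository's API (`rank` and `alpha` are illustrative hyperparameters), is a frozen weight plus a trainable low-rank update:

```python
import jax
import jax.numpy as jnp

def init_lora(key, d_in, d_out, rank=8):
    """Rank-`rank` update for a frozen [d_in, d_out] weight.

    B starts at zero, so training begins exactly at the base model
    (as in the LoRA paper).
    """
    return {
        "a": jax.random.normal(key, (d_in, rank)) * 0.01,  # down-projection
        "b": jnp.zeros((rank, d_out)),                     # up-projection
    }

def lora_linear(x, w_frozen, lora, alpha=16.0):
    """y = x @ (W + (alpha / rank) * A @ B), with only A, B trainable."""
    rank = lora["a"].shape[-1]
    return x @ w_frozen + (alpha / rank) * (x @ lora["a"]) @ lora["b"]
```

Only the `lora` pytree is passed to `jax.grad`, so the frozen weights never receive updates; per its description, the listed library applies the same idea to arbitrary JAX functions automatically.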