google / flaxformer
☆343 · Updated 11 months ago
Alternatives and similar repositories for flaxformer:
Users interested in flaxformer are comparing it to the libraries listed below.
- ☆184 · Updated last month
- Implementation of the specific Transformer architecture from PaLM (Scaling Language Modeling with Pathways) in Jax, using the Equinox framework · ☆187 · Updated 2 years ago
- Train very large language models in Jax · ☆203 · Updated last year
- Implementation of Flash Attention in Jax · ☆206 · Updated last year
- JAX Synergistic Memory Inspector · ☆171 · Updated 8 months ago
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… · ☆483 · Updated last week
- Task-based datasets, preprocessing, and evaluation for sequence models · ☆571 · Updated this week
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax · ☆557 · Updated this week
- jax-triton contains integrations between JAX and OpenAI Triton · ☆386 · Updated last week
- JAX implementation of the Llama 2 model · ☆216 · Updated last year
- CLU lets you write beautiful training loops in JAX · ☆335 · Updated 2 weeks ago
- Inference code for LLaMA models in JAX · ☆116 · Updated 10 months ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch · ☆226 · Updated 6 months ago
- Implementation of a Transformer, but completely in Triton · ☆260 · Updated 2 years ago
- Named tensors with first-class dimensions for PyTorch · ☆321 · Updated last year
- ☆165 · Updated last year
- ☆290 · Updated this week
- Sequence modeling with Mega · ☆295 · Updated 2 years ago
- Code for the ALiBi method for transformer language models (ICLR 2022) · ☆518 · Updated last year
- ☆214 · Updated 8 months ago
- Swarm training framework using Haiku + JAX + Ray for layer-parallel transformer language models on unreliable, heterogeneous nodes · ☆236 · Updated last year
- ☆67 · Updated 2 years ago
- Repository containing code for the "How to Train BERT with an Academic Budget" paper · ☆312 · Updated last year
- Amos optimizer with JEstimator lib · ☆81 · Updated 10 months ago
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training · ☆111 · Updated last year
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena · ☆204 · Updated last year
- LoRA for arbitrary JAX models and functions · ☆135 · Updated last year
- JMP is a Mixed Precision library for JAX · ☆193 · Updated last month
- Annotated version of the Mamba paper · ☆475 · Updated last year
- Scaling Data-Constrained Language Models · ☆333 · Updated 6 months ago