google / flaxformer
☆ 338 · Updated 10 months ago
Alternatives and similar repositories for flaxformer:
Users interested in flaxformer often compare it to the libraries listed below.
- Implementation of Flash Attention in Jax ☆ 204 · Updated 11 months ago
- Pax is a Jax-based machine learning framework for training large scale models. Pax allows for advanced and fully configurable experimenta… ☆ 478 · Updated 2 weeks ago
- Task-based datasets, preprocessing, and evaluation for sequence models. ☆ 568 · Updated this week
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆ 186 · Updated 2 years ago
- JAX Synergistic Memory Inspector ☆ 168 · Updated 7 months ago
- jax-triton contains integrations between JAX and OpenAI Triton ☆ 379 · Updated 3 weeks ago
- Implementation of a Transformer, but completely in Triton ☆ 257 · Updated 2 years ago
- Train very large language models in Jax. ☆ 202 · Updated last year
- Inference code for LLaMA models in JAX ☆ 114 · Updated 9 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆ 542 · Updated this week
- CLU lets you write beautiful training loops in JAX. ☆ 333 · Updated 2 weeks ago
- Named tensors with first-class dimensions for PyTorch ☆ 321 · Updated last year
- Library for reading and processing ML training data. ☆ 386 · Updated this week
- JMP is a Mixed Precision library for JAX (see the usage sketch after this list). ☆ 191 · Updated 3 weeks ago
- LoRA for arbitrary JAX models and functions (a generic LoRA sketch follows this list). ☆ 135 · Updated 11 months ago
- JAX implementation of the Llama 2 model ☆ 215 · Updated last year
- Swarm training framework using Haiku + JAX + Ray for layer-parallel transformer language models on unreliable, heterogeneous nodes ☆ 237 · Updated last year
- Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" (see the chunked-attention sketch after this list). ☆ 370 · Updated last year
- Orbax provides common checkpointing and persistence utilities for JAX users (see the checkpointing sketch after this list). ☆ 339 · Updated this week
- Implementation of the conditionally routed attention in the CoLT5 architecture, in Pytorch ☆ 225 · Updated 5 months ago
- Code for the ALiBi method for transformer language models (ICLR 2022; see the bias sketch after this list). ☆ 515 · Updated last year
- Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT ☆ 206 · Updated 6 months ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆ 210 · Updated 2 years ago
- Sequence modeling with Mega. ☆ 298 · Updated 2 years ago
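
A few of the libraries above are concrete enough to sketch. First, typical JMP usage, based on the library's documented policy-string API; treat the exact policy string, the toy `forward` function, and the loss-scale seed value as assumptions to check against the README:

```python
import jax.numpy as jnp
import jmp

# Keep parameters in f32, run compute in f16, return outputs in f32.
policy = jmp.get_policy("params=float32,compute=float16,output=float32")

params = {"w": jnp.ones((8, 4), jnp.float32), "b": jnp.zeros((4,), jnp.float32)}

def forward(params, x):
    params = policy.cast_to_compute(params)  # f32 -> f16 for the matmuls
    x = policy.cast_to_compute(x)
    y = x @ params["w"] + params["b"]
    return policy.cast_to_output(y)          # cast back to f32 for the caller

y = forward(params, jnp.ones((2, 8)))

# Dynamic loss scaling guards f16 gradients against underflow.
loss_scale = jmp.DynamicLossScale(jnp.float32(2 ** 15))
```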
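
For the LoRA entry, here is a generic low-rank-adapter sketch in plain JAX illustrating the idea the library wraps. It is not the library's actual API; the names `init_lora` and `lora_linear` are hypothetical:

```python
import jax
import jax.numpy as jnp

def init_lora(key, d_in, d_out, rank=8):
    # B starts at zero, so the adapted layer initially equals the frozen one.
    return {
        "A": jax.random.normal(key, (d_in, rank)) * 0.01,  # trained
        "B": jnp.zeros((rank, d_out)),                     # trained
    }

def lora_linear(frozen_w, lora, x, alpha=16.0, rank=8):
    # y = x W + (alpha / rank) * x A B; only A and B receive gradients.
    return x @ frozen_w + (alpha / rank) * (x @ lora["A"] @ lora["B"])
```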
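
The memory-efficient attention repository implements the chunked scheme from "Self-attention Does Not Need O(n²) Memory". A minimal single-head sketch of that idea, not the repository's code; it omits masking, batching, and gradient checkpointing:

```python
import jax
import jax.numpy as jnp

def chunked_attention(q, k, v, key_chunk_size=128):
    """Attention that never materializes the full [Lq, Lk] score matrix."""
    q = q / jnp.sqrt(q.shape[-1])
    L_k, d = k.shape
    assert L_k % key_chunk_size == 0, "sketch assumes an even chunk split"
    k_chunks = k.reshape(-1, key_chunk_size, d)
    v_chunks = v.reshape(-1, key_chunk_size, v.shape[-1])

    def scan_chunk(carry, kv):
        acc, denom, row_max = carry
        k_c, v_c = kv
        scores = q @ k_c.T                                 # [Lq, chunk]
        new_max = jnp.maximum(row_max, scores.max(-1, keepdims=True))
        correction = jnp.exp(row_max - new_max)            # rescale old partial sums
        probs = jnp.exp(scores - new_max)
        acc = acc * correction + probs @ v_c
        denom = denom * correction + probs.sum(-1, keepdims=True)
        return (acc, denom, new_max), None

    init = (
        jnp.zeros((q.shape[0], v.shape[-1])),
        jnp.zeros((q.shape[0], 1)),
        jnp.full((q.shape[0], 1), -jnp.inf),
    )
    (acc, denom, _), _ = jax.lax.scan(scan_chunk, init, (k_chunks, v_chunks))
    return acc / denom
```

The running max and correction factor keep the softmax numerically stable, while peak memory scales with the chunk size rather than the full key length.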
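
Orbax's basic PyTree checkpointing looks roughly like the following; the API has evolved across versions, so treat this as an assumption to verify against the current docs:

```python
import jax.numpy as jnp
import orbax.checkpoint as ocp

state = {"params": {"w": jnp.ones((4, 4))}, "step": 100}

checkpointer = ocp.PyTreeCheckpointer()
checkpointer.save("/tmp/my_ckpt", state)       # writes a directory of arrays
restored = checkpointer.restore("/tmp/my_ckpt")
```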
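
Finally, the ALiBi repository's method replaces position embeddings with a fixed linear bias on the attention logits. A sketch of the bias construction, covering only the power-of-two head count case; slopes follow the paper's geometric sequence, and the helper names are illustrative:

```python
import jax.numpy as jnp

def alibi_slopes(num_heads):
    # Geometric sequence 2^(-8/n), 2^(-16/n), ... for n heads (n a power of two).
    start = 2.0 ** (-8.0 / num_heads)
    return start ** jnp.arange(1, num_heads + 1)

def alibi_bias(num_heads, seq_len):
    # bias[h, i, j] = -slope[h] * (i - j): the penalty grows with key distance.
    distance = jnp.arange(seq_len)[:, None] - jnp.arange(seq_len)[None, :]
    return -alibi_slopes(num_heads)[:, None, None] * distance

# Added to the pre-softmax logits, combined with the usual causal mask:
# logits = q @ k.T / sqrt(d) + alibi_bias(num_heads, seq_len)
```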