lucidrains / PaLM-jax
Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in JAX (Equinox framework)
☆186 · Updated 2 years ago
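A minimal usage sketch, assuming the constructor keywords typical of lucidrains repositories (`num_tokens`, `dim`, `depth`, `heads`, `dim_head`, plus an Equinox PRNG `key`); these names are assumptions, so check the repository README for the exact API:

```python
from jax import random
from palm_jax import PaLM  # assumed import path; see the repo README

key = random.PRNGKey(0)

# Hyperparameter names below are assumptions based on the usual
# lucidrains API, not confirmed against this exact release.
model = PaLM(
    num_tokens = 20000,
    dim = 512,
    depth = 12,
    heads = 8,
    dim_head = 64,
    key = key,
)

# Token ids of shape (batch, seq_len)
seq = random.randint(random.PRNGKey(1), (1, 1024), 0, 20000)

logits = model(seq)  # expected shape: (1, 1024, 20000)
```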
Alternatives and similar repositories for PaLM-jax:
Users interested in PaLM-jax are comparing it to the libraries listed below.
- ☆58 · Updated 2 years ago
- ☆65 · Updated 2 years ago
- Implementation of Flash Attention in JAX · ☆204 · Updated 11 months ago
- Train very large language models in JAX · ☆202 · Updated last year
- ☆338 · Updated 10 months ago
- Amos optimizer with JEstimator lib · ☆81 · Updated 9 months ago
- A case study of efficient training of large language models using commodity hardware · ☆68 · Updated 2 years ago
- JAX implementation of the Llama 2 model · ☆215 · Updated last year
- JAX Synergistic Memory Inspector · ☆168 · Updated 7 months ago
- Swarm training framework using Haiku + JAX + Ray for layer-parallel transformer language models on unreliable, heterogeneous nodes · ☆237 · Updated last year
- Inference code for LLaMA models in JAX · ☆114 · Updated 9 months ago
- Named tensors with first-class dimensions for PyTorch · ☆321 · Updated last year
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper · ☆80 · Updated 3 years ago
- LoRA for arbitrary JAX models and functions · ☆135 · Updated 11 months ago
- JMP is a Mixed Precision library for JAX (see the usage sketch after this list) · ☆191 · Updated 3 weeks ago
- some common Huggingface transformers in maximal update parametrization (µP) · ☆78 · Updated 2 years ago
- ☆182 · Updated 2 weeks ago
- Exploring finetuning public checkpoints on filtered 8K sequences on the Pile · ☆115 · Updated last year
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch · ☆225 · Updated 5 months ago
- HomebrewNLP in JAX flavour for maintainable TPU training · ☆48 · Updated last year
- A simple library for scaling up JAX programs · ☆129 · Updated 3 months ago
- Contrastive Language-Image Pretraining · ☆142 · Updated 2 years ago
- Python Research Framework · ☆106 · Updated 2 years ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention · ☆210 · Updated 2 years ago
- ☆164 · Updated last year
- Learned Hyperparameter Optimizers · ☆58 · Updated 3 years ago
- Unofficial JAX implementations of deep learning research papers · ☆153 · Updated 2 years ago
- Implementation of Gated State Spaces, from the paper "Long Range Language Modeling via Gated State Spaces", in PyTorch · ☆97 · Updated last year
- Memory Efficient Attention (O(sqrt(n))) for JAX and PyTorch · ☆180 · Updated 2 years ago
- Simple and efficient RevNet library for PyTorch with XLA and DeepSpeed support and parameter offload · ☆126 · Updated 2 years ago
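As referenced above, JMP is the mixed-precision companion library that pairs with several of these JAX training projects. A minimal sketch of its policy pattern follows; the policy string format and `cast_to_*` methods are based on JMP's README, so verify against the current docs:

```python
import jmp
import jax.numpy as jnp

# Mixed-precision policy: parameters kept in float32,
# compute done in float16, outputs cast back to float32.
policy = jmp.get_policy("params=float32,compute=float16,output=float32")

params = {"w": jnp.ones((4, 4))}

def forward(params, x):
    # Cast the param pytree and inputs down to the compute dtype (float16 here)...
    params = policy.cast_to_compute(params)
    x = policy.cast_to_compute(x)
    y = x @ params["w"]
    # ...and cast the result back to the output dtype (float32 here).
    return policy.cast_to_output(y)

print(forward(params, jnp.ones((2, 4))).dtype)  # float32
```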