erfanzar / EasyDeL
Accelerate and optimize performance with streamlined training and serving options in JAX.
☆218 Updated this week
Alternatives and similar repositories for EasyDeL:
Users who are interested in EasyDeL are comparing it to the libraries listed below.
- Parallel/unparalleled computation with FJFormer ☆24 Updated last month
- Multipack distributed sampler for fast padding-free training of LLMs ☆184 Updated 5 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆90 Updated 2 months ago
- Flash Attention Implementation with Multiple Backend Support and Sharding. This module provides a flexible implementation of Flash Attenti… ☆20 Updated last month
- JAX implementation of the Llama 2 model ☆213 Updated 11 months ago
- Inference code for LLaMA models in JAX ☆115 Updated 7 months ago
- seqax = sequence modeling + JAX ☆136 Updated 6 months ago
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆121 Updated 9 months ago
- LoRA for arbitrary JAX models and functions ☆135 Updated 10 months ago
- Understand and test language model architectures on synthetic tasks. ☆175 Updated this week
- Muon optimizer for neural networks: >30% extra sample efficiency, <3% wallclock overhead ☆210 Updated 2 weeks ago
- ☆201 Updated 6 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆536 Updated this week
- ☆269 Updated 6 months ago
- A set of Python scripts that make your experience on TPU better ☆44 Updated 6 months ago
- Implementation of the Llama architecture with RLHF + Q-learning ☆157 Updated last year
- JAX Synergistic Memory Inspector ☆164 Updated 6 months ago
- Google TPU optimizations for transformers models ☆87 Updated this week
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆219 Updated last month
- ☆75 Updated 6 months ago
- ☆53 Updated last year
- Prune transformer layers ☆67 Updated 7 months ago
- supporting pytorch FSDP for optimizers ☆75 Updated last month
- A simple library for scaling up JAX programs ☆129 Updated 2 months ago
- Train very large language models in Jax. ☆198 Updated last year
- ☆180 Updated this week
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆81 Updated last year
- A fast implementation of T5/UL2 in PyTorch using Flash Attention ☆75 Updated this week
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆215 Updated this week
- code for training & evaluating Contextual Document Embedding models ☆163 Updated last week