erfanzar / EasyDeL
Accelerate and optimize performance with streamlined training and serving options in JAX.
☆296 · Updated this week
Alternatives and similar repositories for EasyDeL
Users interested in EasyDeL are comparing it to the libraries listed below.
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆639 · Updated this week
- ☆275 · Updated last year
- Inference code for LLaMA models in JAX ☆118 · Updated last year
- JAX implementation of the Llama 2 model ☆219 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆149 · Updated last month
- A set of Python scripts that makes your experience on TPU better ☆54 · Updated last year
- ☆209 · Updated 6 months ago
- ☆144 · Updated last week
- Fast bare-bones BPE for modern tokenizer training ☆164 · Updated last month
- Understand and test language model architectures on synthetic tasks. ☆221 · Updated last month
- seqax = sequence modeling + JAX ☆165 · Updated 3 weeks ago
- A JAX-based library for building transformers; includes implementations of GPT, Gemma, LLaMA, Mixtral, Whisper, Swin, ViT and more. ☆290 · Updated 11 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆124 · Updated 7 months ago
- 🧱 Modula software package ☆220 · Updated 2 weeks ago
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still a work in progress)* ☆87 · Updated last year
- Train very large language models in JAX. ☆206 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆238 · Updated 2 months ago
- ☆363 · Updated this week
- Normalized Transformer (nGPT) ☆186 · Updated 8 months ago
- Cost-aware hyperparameter tuning algorithm ☆167 · Updated last year
- A JAX-native LLM post-training library ☆92 · Updated this week
- nanoGPT-like codebase for LLM training ☆102 · Updated 2 months ago
- A repository for research on medium-sized language models. ☆510 · Updated 2 months ago
- An extension of the nanoGPT repository for training small MoE models. ☆172 · Updated 5 months ago
- Website for hosting the Open Foundation Models Cheat Sheet. ☆267 · Updated 3 months ago
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆256 · Updated last year
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆258 · Updated 3 weeks ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆269 · Updated last year
- ☆83 · Updated last year
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ☆466 · Updated this week