erfanzar / EasyDeL
Accelerate and optimize model training and serving with streamlined options in JAX.
☆276 · Updated this week
Alternatives and similar repositories for EasyDeL
Users interested in EasyDeL are comparing it to the libraries listed below.
- (EasyDeL Former) is a utility library designed to simplify and enhance development in JAX ☆27 · Updated last week
- JAX implementation of the Llama 2 model ☆218 · Updated last year
- ☆269 · Updated 10 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and JAX ☆591 · Updated this week
- Google TPU optimizations for Transformers models ☆112 · Updated 4 months ago
- Inference code for LLaMA models in JAX ☆118 · Updated last year
- An extension of the nanoGPT repository for training small MoE models ☆147 · Updated 3 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆132 · Updated last month
- A JAX-based library for building transformers; includes implementations of GPT, Gemma, LLaMA, Mixtral, Whisper, Swin, ViT, and more ☆287 · Updated 9 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆190 · Updated 10 months ago
- ☆190 · Updated 3 months ago
- A flexible and efficient implementation of Flash Attention 2.0 for JAX, supporting multiple backends (GPU/TPU/CPU) and platforms (Triton/… ☆24 · Updated 3 months ago
- A set of Python scripts that make your experience on TPU better ☆54 · Updated 11 months ago
- seqax = sequence modeling + JAX ☆155 · Updated 2 months ago
- ☆118 · Updated 2 weeks ago
- Understand and test language model architectures on synthetic tasks ☆204 · Updated this week
- Fast bare-bones BPE for modern tokenizer training ☆158 · Updated 2 months ago
- prime-rl is a codebase for decentralized async RL training at scale ☆318 · Updated this week
- Pax is a JAX-based machine learning framework for training large-scale models. Pax allows for advanced and fully configurable experimenta… ☆499 · Updated 2 weeks ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆231 · Updated 7 months ago
- Supporting PyTorch FSDP for optimizers ☆80 · Updated 6 months ago
- ☆182 · Updated this week
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆80 · Updated 3 years ago
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ☆385 · Updated last month
- A MAD laboratory to improve AI architecture designs 🧪 ☆118 · Updated 5 months ago
- nanoGPT-like codebase for LLM training ☆94 · Updated 3 weeks ago
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach ☆202 · Updated last month
- Deep learning library implemented from scratch in NumPy. Mixtral, Mamba, LLaMA, GPT, ResNet, and other experiments ☆51 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆190 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆259 · Updated 10 months ago