huggingface / picotron
Minimalistic 4D-parallelism distributed training framework for educational purposes
☆935 · Updated 2 weeks ago
Alternatives and similar repositories for picotron:
Users interested in picotron are comparing it to the libraries listed below.
- Minimalistic large language model 3D-parallelism training · ☆1,701 · Updated this week
- Helpful tools and examples for working with flex-attention · ☆689 · Updated last week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton · ☆524 · Updated last month
- What would you do with 1000 H100s... · ☆1,016 · Updated last year
- Puzzles for learning Triton · ☆1,508 · Updated 4 months ago
- A bibliography and survey of the papers surrounding o1 · ☆1,180 · Updated 4 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash Attention · ☆232 · Updated 2 weeks ago
- LLM KV cache compression made easy · ☆440 · Updated this week
- Best practices & guides on how to write distributed PyTorch training code · ☆368 · Updated 3 weeks ago
- Muon optimizer: >30% sample efficiency with <3% wallclock overhead · ☆505 · Updated last week
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch · ☆506 · Updated 4 months ago
- Flash Attention in ~100 lines of CUDA (forward pass only) · ☆732 · Updated 2 months ago
- Building blocks for foundation models · ☆464 · Updated last year
- Tile primitives for speedy kernels · ☆2,153 · Updated this week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends · ☆1,313 · Updated this week
- A repository for research on medium-sized language models · ☆493 · Updated 2 months ago
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling" · ☆855 · Updated last month
- Recipes to scale inference-time compute of open models · ☆1,041 · Updated 3 weeks ago
- Training Large Language Models to Reason in a Continuous Latent Space · ☆985 · Updated last month
- UNet diffusion model in pure CUDA · ☆600 · Updated 8 months ago
- Large Context Attention · ☆690 · Updated last month
- Code for the BLT research paper · ☆1,436 · Updated last week
- Ring attention implementation with flash attention · ☆711 · Updated 3 weeks ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, sparsely activated memory layers complement compute-heavy dense feed-forward layers (see the sketch after this list) · ☆307 · Updated 3 months ago
- A throughput-oriented high-performance serving framework for LLMs · ☆766 · Updated 6 months ago
- 🚀 Efficient implementations of state-of-the-art linear attention models in Torch and Triton · ☆2,111 · Updated this week
- Efficient LLM Inference over Long Sequences · ☆365 · Updated last month
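
The memory-layers entry above describes a trainable key-value lookup that adds parameters to a model without adding FLOPs. Below is a minimal PyTorch sketch of that idea; the class name, slot count, and top-k size are illustrative assumptions rather than that repo's actual API, and the dense scoring over all slots is a simplification (real memory layers such as product-key memories factorize the keys so scoring never touches every slot).

```python
# A minimal sketch of a trainable key-value memory layer, assuming a
# Transformer residual stream of shape (batch, seq, d_model). Names and
# sizes are hypothetical, for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    def __init__(self, d_model: int, num_slots: int = 4096, topk: int = 32):
        super().__init__()
        self.query_proj = nn.Linear(d_model, d_model)
        self.keys = nn.Parameter(torch.randn(num_slots, d_model) / d_model**0.5)
        self.values = nn.Embedding(num_slots, d_model)  # the "extra parameters"
        self.topk = topk

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = self.query_proj(x)                           # (B, S, D)
        scores = q @ self.keys.t()                       # (B, S, num_slots)
        top_scores, top_idx = scores.topk(self.topk, dim=-1)
        weights = F.softmax(top_scores, dim=-1)          # (B, S, topk)
        gathered = self.values(top_idx)                  # (B, S, topk, D)
        # Each token reads only topk value slots, so compute per token stays
        # roughly flat while num_slots (parameter count) can grow. (Here the
        # key scoring is still dense; product-key schemes avoid that too.)
        return x + torch.einsum("bsk,bskd->bsd", weights, gathered)

if __name__ == "__main__":
    layer = MemoryLayer(d_model=64)
    out = layer(torch.randn(2, 8, 64))
    print(out.shape)  # torch.Size([2, 8, 64])
```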