huggingface / picotron
Minimalistic 4D-parallelism distributed training framework for educational purposes
☆1,346 Updated last month
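For context, "4D parallelism" here means composing data, tensor, pipeline, and context (sequence) parallelism across the GPU fleet. The sketch below is not picotron's API; it is a minimal, hypothetical illustration (the dp/tp/pp/cp degrees and launch command are assumptions) of how such a layout is commonly expressed with PyTorch's DeviceMesh.

```python
# Hypothetical sketch of a 4D-parallel layout using torch.distributed's DeviceMesh.
# Illustrative only -- not picotron's actual configuration or API.
import os
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh

# The four parallelism axes: data, tensor, pipeline, and context (sequence) parallel.
dp, tp, pp, cp = 2, 2, 2, 2          # example degrees; their product must equal WORLD_SIZE
world_size = dp * tp * pp * cp       # 16 GPUs in this example

if __name__ == "__main__" and int(os.environ.get("WORLD_SIZE", "1")) == world_size:
    # Assumes launch via: torchrun --nproc_per_node=16 this_script.py
    mesh = init_device_mesh(
        "cuda",
        mesh_shape=(dp, pp, tp, cp),
        mesh_dim_names=("dp", "pp", "tp", "cp"),
    )
    # Each rank can pull out the process group for one axis, e.g. the
    # data-parallel group used for gradient all-reduce:
    dp_group = mesh["dp"].get_group()
    print(f"rank {dist.get_rank()} dp group size: {dp_group.size()}")
```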
Alternatives and similar repositories for picotron:
Users interested in picotron are comparing it to the libraries listed below.
- Minimalistic large language model 3D-parallelism training ☆1,836 Updated this week
- Implementing DeepSeek R1's GRPO algorithm from scratch ☆1,277 Updated 2 weeks ago
- NanoGPT (124M) in 3 minutes ☆2,520 Updated last week
- Puzzles for learning Triton ☆1,603 Updated 5 months ago
- A PyTorch native library for large-scale model training ☆3,665 Updated this week
- nanoGPT style version of Llama 3.1 ☆1,361 Updated 8 months ago
- A bibliography and survey of the papers surrounding o1 ☆1,190 Updated 5 months ago
- What would you do with 1000 H100s... ☆1,043 Updated last year
- FlashInfer: Kernel Library for LLM Serving ☆2,764 Updated this week
- UNet diffusion model in pure CUDA ☆601 Updated 10 months ago
- Best practices & guides on how to write distributed PyTorch training code ☆406 Updated 2 months ago
- Tile primitives for speedy kernels ☆2,312 Updated this week
- Code for BLT research paper ☆1,546 Updated this week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,482 Updated this week
- Muon is Scalable for LLM Training ☆1,039 Updated last month
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆867 Updated this week
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆788 Updated this week
- Building blocks for foundation models. ☆487 Updated last year
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆536 Updated last week
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆796 Updated 4 months ago
- Helpful tools and examples for working with flex-attention ☆746 Updated 3 weeks ago
- The Multilayer Perceptron Language Model ☆547 Updated 8 months ago
- PyTorch native quantization and sparsity for training and inference ☆2,015 Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆1,301 Updated this week
- Training Large Language Model to Reason in a Continuous Latent Space ☆1,094 Updated 3 months ago
- GPU programming related news and material links ☆1,480 Updated 4 months ago
- The Autograd Engine ☆603 Updated 7 months ago
- Recipes to scale inference-time compute of open models ☆1,066 Updated 2 months ago
- 🚀 Efficient implementations of state-of-the-art linear attention models in Torch and Triton ☆2,344 Updated this week
- Ring attention implementation with flash attention ☆757 Updated 3 weeks ago