huggingface / picotron
Minimalistic 4D-parallelism distributed training framework for educational purposes
☆544 · Updated 3 weeks ago
Alternatives and similar repositories for picotron:
Users interested in picotron are comparing it to the libraries listed below.
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton.☆505 · Updated 2 months ago
- Minimalistic large language model 3D-parallelism training☆1,377 · Updated this week
- Best practices & guides on how to write distributed PyTorch training code☆329 · Updated 3 weeks ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash…☆213 · Updated this week
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch (see the ring-attention sketch after this list)☆488 · Updated 2 months ago
- Helpful tools and examples for working with flex-attention (see the usage sketch after this list)☆568 · Updated this week
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling"☆829 · Updated last month
- LLM KV cache compression made easy☆289 · Updated this week
- A bibliography and survey of the papers surrounding o1☆1,029 · Updated last month
- Puzzles for learning Triton☆1,282 · Updated last month
- UNet diffusion model in pure CUDA☆591 · Updated 6 months ago
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models.☆750 · Updated last month
- A repository for research on medium-sized language models.☆485 · Updated last month
- What would you do with 1000 H100s...☆934 · Updated last year
- Building blocks for foundation models.☆431 · Updated last year
- For optimization algorithm research and development.☆482 · Updated this week
- PyTorch per-step fault tolerance (actively under development)☆209 · Updated this week
- Large Context Attention☆665 · Updated 4 months ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM☆842 · Updated this week
- Recipes to scale inference-time compute of open models☆899 · Updated this week
- Scalable toolkit for efficient model alignment☆665 · Updated this week
- Flash Attention in ~100 lines of CUDA (forward pass only)☆679 · Updated last week
- Official implementation of Half-Quadratic Quantization (HQQ)☆729 · Updated this week
- Annotated version of the Mamba paper☆467 · Updated 10 months ago
- Pipeline Parallelism for PyTorch☆732 · Updated 4 months ago
- Cataloging released Triton kernels.☆147 · Updated this week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends☆946 · Updated this week
- Ring attention implementation with flash attention☆634 · Updated 3 weeks ago
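
For the ring-attention entries above, here is a minimal single-process sketch of the core idea (an illustration, not the code of either repository): the sequence is split into shards, each simulated ring member keeps its query shard fixed while the key/value shards rotate around the ring, and partial results are merged with an online (log-sum-exp) softmax. The function name, shapes, and `world_size` are assumptions for the sketch.

```python
import torch

def ring_attention(q, k, v, world_size=4):
    # q, k, v: (seq_len, dim). Split the sequence across simulated ring members.
    q_shards = q.chunk(world_size)
    k_shards = k.chunk(world_size)
    v_shards = v.chunk(world_size)
    scale = q.shape[-1] ** -0.5
    outs = []
    for rank in range(world_size):
        qi = q_shards[rank]
        m = torch.full((qi.shape[0], 1), float("-inf"))  # running row max
        l = torch.zeros(qi.shape[0], 1)                  # running normalizer
        acc = torch.zeros_like(qi)                       # unnormalized output
        for step in range(world_size):
            # On real hardware each step would send/recv a K/V block to the
            # next rank; here we just index the block that would arrive.
            j = (rank + step) % world_size
            s = (qi @ k_shards[j].T) * scale
            m_new = torch.maximum(m, s.max(dim=-1, keepdim=True).values)
            p = torch.exp(s - m_new)
            correction = torch.exp(m - m_new)  # rescale old accumulators
            l = l * correction + p.sum(dim=-1, keepdim=True)
            acc = acc * correction + p @ v_shards[j]
            m = m_new
        outs.append(acc / l)
    return torch.cat(outs)

# Sanity check against ordinary full attention.
torch.manual_seed(0)
q, k, v = (torch.randn(128, 64) for _ in range(3))
ref = torch.softmax((q @ k.T) * 64 ** -0.5, dim=-1) @ v
print(torch.allclose(ring_attention(q, k, v), ref, atol=1e-5))  # expect: True
```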
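
And for the flex-attention entry, a minimal usage sketch assuming PyTorch 2.5 or newer, where `torch.nn.attention.flex_attention` is available: a `score_mod` callback applies a causal mask plus a toy ALiBi-style distance penalty. Shapes and the bias slope are arbitrary; in practice the call is usually wrapped in `torch.compile` on a GPU for fused-kernel performance.

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

B, H, S, D = 2, 4, 256, 64  # batch, heads, sequence length, head dim (arbitrary)
q, k, v = (torch.randn(B, H, S, D) for _ in range(3))

def causal_alibi(score, b, h, q_idx, kv_idx):
    # Penalize attention to distant past tokens; mask out future tokens.
    bias = -0.05 * (q_idx - kv_idx)
    return torch.where(q_idx >= kv_idx, score + bias, -float("inf"))

out = flex_attention(q, k, v, score_mod=causal_alibi)
print(out.shape)  # torch.Size([2, 4, 256, 64])
```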