vdesai2014 / inference-optimization-blog-post
☆88 · Updated last year
Alternatives and similar repositories for inference-optimization-blog-post
Users interested in inference-optimization-blog-post are comparing it to the libraries listed below.
- ☆162 · Updated last year
- ring-attention experiments ☆149 · Updated 10 months ago
- Load compute kernels from the Hub ☆244 · Updated this week
- Solve puzzles. Learn CUDA. ☆64 · Updated last year
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆137 · Updated last year
- ☆87 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated last year
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆260 · Updated 3 weeks ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆128 · Updated 8 months ago
- seqax = sequence modeling + JAX ☆166 · Updated last month
- JAX bindings for Flash Attention v2 ☆90 · Updated 3 weeks ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆568 · Updated last week
- Experiment in using Tangent to autodiff Triton ☆80 · Updated last year
- ☆232 · Updated this week
- ☆118 · Updated last year
- Normalized Transformer (nGPT) ☆186 · Updated 9 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆190 · Updated 2 months ago
- Minimal (400 LOC) implementation, maximum (multi-node, FSDP) GPT training ☆130 · Updated last year
- Fast low-bit matmul kernels in Triton ☆349 · Updated this week
- PTX tutorial written purely by AIs (OpenAI's Deep Research and Claude 3.7) ☆66 · Updated 4 months ago
- Cataloging released Triton kernels. ☆252 · Updated 7 months ago
- Learn CUDA with PyTorch ☆35 · Updated last month
- Implementation of Diffusion Transformer (DiT) in JAX ☆291 · Updated last year
- ☆211 · Updated 6 months ago
- PyTorch Single Controller ☆361 · Updated last week
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆383 · Updated last week
- ☆32 · Updated last year
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆87 · Updated last year
- Learning about CUDA by writing PTX code. ☆134 · Updated last year
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆208 · Updated last week