Quentin-Anthony / torch-profiling-tutorial
☆541 · Updated 5 months ago
Alternatives and similar repositories for torch-profiling-tutorial
Users interested in torch-profiling-tutorial are comparing it to the repositories listed below.
- Best practices & guides on how to write distributed PyTorch training code ☆571 · Updated 3 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆197 · Updated 8 months ago
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆829 · Updated 6 months ago
- (WIP) A small but powerful, homemade PyTorch from scratch. ☆672 · Updated last week
- ☆490 · Updated last year
- Complete solutions to Programming Massively Parallel Processors, 4th Edition ☆647 · Updated 7 months ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code ☆457 · Updated 10 months ago
- UNet diffusion model in pure CUDA ☆661 · Updated last year
- Learnings and programs related to CUDA ☆431 · Updated 7 months ago
- Simple Transformer in Jax ☆142 · Updated last year
- Puzzles for exploring transformers ☆384 · Updated 2 years ago
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ☆827 · Updated last week
- ☆558 · Updated last year
- Simple MPI implementation for prototyping or learning ☆300 · Updated 5 months ago
- Following Karpathy's GPT-2 implementation and training, writing lots of comments because I have the memory of a goldfish ☆172 · Updated last year
- Implementation of Diffusion Transformer (DiT) in JAX ☆306 · Updated last year
- Dion optimizer algorithm ☆424 · Updated 2 weeks ago
- ☆289 · Updated last year
- Small auto-grad engine inspired by Karpathy's micrograd and PyTorch ☆276 · Updated last year
- A practical guide to diffusion models, implemented from scratch. ☆244 · Updated last month
- ☆178 · Updated 2 years ago
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism. ☆114 · Updated last month
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference.