Quentin-Anthony / torch-profiling-tutorial
☆541 · Updated 6 months ago
Alternatives and similar repositories for torch-profiling-tutorial
Users who are interested in torch-profiling-tutorial are comparing it to the libraries listed below.
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆197 · Updated 8 months ago
- Best practices & guides on how to write distributed pytorch training code ☆575 · Updated 3 months ago
- Complete solutions to the Programming Massively Parallel Processors Edition 4 ☆655 · Updated 7 months ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆457 · Updated 10 months ago
- Dion optimizer algorithm ☆424 · Updated 3 weeks ago
- FlexAttention based, minimal vllm-style inference engine for fast Gemma 2 inference. ☆334 · Updated 3 months ago
- Simple Transformer in Jax ☆142 · Updated last year
- in this repository, i'm going to implement increasingly complex llm inference optimizations ☆81 · Updated 8 months ago
- ☆492 · Updated last year
- (WIP) A small but powerful, homemade PyTorch from scratch. ☆672 · Updated last week
- ☆289 · Updated last year
- Simple MPI implementation for prototyping or learning ☆300 · Updated 6 months ago
- ☆562 · Updated last year
- A practical guide to diffusion models, implemented from scratch. ☆244 · Updated last month
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆829 · Updated 6 months ago
- UNet diffusion model in pure CUDA ☆661 · Updated last year
- Learnings and programs related to CUDA ☆432 · Updated 7 months ago
- small auto-grad engine inspired from Karpathy's micrograd and PyTorch ☆276 · Updated last year
- MoE training for Me and You and maybe other people ☆335 · Updated last month
- Following Karpathy with GPT-2 implementation and training, writing lots of comments cause I have memory of a goldfish ☆172 · Updated last year
- ☆215 · Updated last year
- Puzzles for exploring transformers ☆384 · Updated 2 years ago
- Small scale distributed training of sequential deep learning models, built on Numpy and MPI. ☆155 · Updated 2 years ago
- ☆178 · Updated 2 years ago
- Implementation of Diffusion Transformer (DiT) in JAX ☆306 · Updated last year
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" ☆589 · Updated 4 months ago
- Open-source framework for the research and development of foundation models. ☆752 · Updated this week
- Fast bare-bones BPE for modern tokenizer training ☆175 · Updated 7 months ago
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism. ☆115 · Updated last month
- Solve puzzles. Learn CUDA. ☆63 · Updated 2 years ago