gpu-mode / lecture2
Obsolete version of CUDA-mode repo -- use cuda-mode/lectures instead
☆ 27 · Updated last year
Alternatives and similar repositories for lecture2
Users interested in lecture2 are comparing it to the repositories listed below.
- ☆ 178 · Updated last year
- ☆ 224 · Updated last month
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆ 267 · Updated last month
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆ 195 · Updated 7 months ago
- ☆ 233 · Updated last year
- An extension of the nanoGPT repository for training small MoE models ☆ 225 · Updated 10 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆ 326 · Updated 3 months ago
- Yet Another Language Model: LLM inference in C++/CUDA, no libraries except for I/O ☆ 547 · Updated 4 months ago
- Alex Krizhevsky's original code from Google Code ☆ 198 · Updated 9 years ago
- An implementation of the transformer architecture as an Nvidia CUDA kernel ☆ 202 · Updated 2 years ago
- Coding CUDA every day! ☆ 72 · Updated last month
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code ☆ 453 · Updated 10 months ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference ☆ 331 · Updated 2 months ago
- CUDA tutorials for Maths & ML with examples, covering multi-GPU, fused attention, Winograd convolution, and reinforcement learning ☆ 206 · Updated 7 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆ 93 · Updated this week
- My personal site ☆ 80 · Updated last week
- GPU Kernels ☆ 218 · Updated 8 months ago
- Efficient LLM Inference over Long Sequences ☆ 393 · Updated 6 months ago
- Fine-tune an LLM to perform batch inference and online serving ☆ 117 · Updated 7 months ago
- Learn CUDA with PyTorch ☆ 176 · Updated 3 weeks ago
- Cataloging released Triton kernels ☆ 287 · Updated 4 months ago
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆ 247 · Updated 8 months ago
- Simple MPI implementation for prototyping or learning ☆ 300 · Updated 5 months ago
- Minimal example scripts of the Hugging Face Trainer, focused on staying under 150 lines ☆ 196 · Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆ 352 · Updated 8 months ago
- Distributed training (multi-node) of a Transformer model ☆ 91 · Updated last year
- A minimal cache manager for PagedAttention, on top of llama3 ☆ 130 · Updated last year
- ☆ 218 · Updated 11 months ago
- LoRA and DoRA from Scratch Implementations ☆ 215 · Updated last year
- Google TPU optimizations for transformers models ☆ 132 · Updated 3 weeks ago