hkproj / quantization-notes
Notes on quantization in neural networks
☆117 · Updated 2 years ago
Alternatives and similar repositories for quantization-notes
Users interested in quantization-notes are comparing it to the repositories listed below.
- ☆234 · Updated last year
- 100 days of building GPU kernels! ☆568 · Updated 9 months ago
- GPU Kernels ☆218 · Updated 9 months ago
- Making the official Triton tutorials actually comprehensible ☆104 · Updated 5 months ago
- ☆178 · Updated 2 years ago
- ☆46 · Updated 8 months ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code ☆457 · Updated 10 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆197 · Updated 8 months ago
- Distributed training (multi-node) of a Transformer model ☆93 · Updated last year
- LLaMA 2 implemented from scratch in PyTorch ☆366 · Updated 2 years ago
- Slides, notes, and materials for the workshop ☆339 · Updated last year
- ☆412 · Updated 9 months ago
- Complete implementation of Llama 2 with/without KV cache & inference 🚀 ☆49 · Updated last year
- Coding CUDA every day! ☆73 · Updated last week
- An extension of the nanoGPT repository for training small MoE models ☆233 · Updated 10 months ago
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆249 · Updated 9 months ago
- Learn CUDA with PyTorch ☆193 · Updated this week
- Recreating PyTorch from scratch (C/C++, CUDA, NCCL, and Python, with multi-GPU support and automatic differentiation!) ☆162 · Updated 2 months ago
- LoRA: Low-Rank Adaptation of Large Language Models, implemented in PyTorch ☆122 · Updated 2 years ago
- Notes on the "Attention Is All You Need" video (https://www.youtube.com/watch?v=bCz4OMemCcA) ☆336 · Updated 2 years ago
- LoRA and DoRA from-scratch implementations ☆215 · Updated last year
- An implementation of the transformer architecture as an Nvidia CUDA kernel ☆202 · Updated 2 years ago
- Applying GPUs in ML and DL ☆56 · Updated 4 months ago
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆74 · Updated 2 years ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference ☆334 · Updated 3 months ago
- ☆89 · Updated 2 months ago
- Best practices & guides on how to write distributed PyTorch training code ☆575 · Updated 3 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆356 · Updated 2 weeks ago
- This repository is a curated collection of resources, tutorials, and practical examples designed to guide you through the journey of mast… ☆435 · Updated 11 months ago
- Mixed-precision training from scratch with Tensors and CUDA ☆28 · Updated last year