hkproj / quantization-notes
Notes on quantization in neural networks
☆83 · Updated last year
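The notes cover how neural-network weights and activations are mapped to low-bit integers. As a quick illustration of that topic, here is a minimal PyTorch sketch of symmetric per-tensor int8 quantization and dequantization. This is not code from the repository; the function names and the symmetric per-tensor scheme are assumptions for the example.

```python
import torch

def quantize_int8(x: torch.Tensor):
    """Symmetric per-tensor quantization: one float scale for the whole tensor."""
    # Largest magnitude maps to +/-127; clamp the scale to avoid divide-by-zero.
    scale = x.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scale), -128, 127).to(torch.int8)
    return q, scale

def dequantize_int8(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximation of the original float values."""
    return q.to(torch.float32) * scale

x = torch.randn(4, 4)
q, scale = quantize_int8(x)
x_hat = dequantize_int8(q, scale)
print("max abs quantization error:", (x - x_hat).abs().max().item())
```

Per-channel and asymmetric (zero-point) variants follow the same pattern, with one scale per output channel and an integer offset added before rounding.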
Alternatives and similar repositories for quantization-notes
Users interested in quantization-notes are comparing it to the repositories listed below.
- ☆168 · Updated 5 months ago
- Making the official Triton tutorials actually comprehensible ☆34 · Updated 2 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆184 · Updated last week
- LORA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆104 · Updated last year
- ☆157 · Updated last year
- GPU Kernels ☆178 · Updated last month
- Distributed training (multi-node) of a Transformer model ☆68 · Updated last year
- LoRA and DoRA from Scratch Implementations ☆203 · Updated last year
- Complete implementation of Llama2 with/without KV cache & inference 🚀 ☆46 · Updated last year
- ☆35 · Updated last week
- A repository dedicated to evaluating the performance of quantized LLaMA3 using various quantization methods. ☆185 · Updated 4 months ago
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆181 · Updated 3 weeks ago
- LLaMA 2 implemented from scratch in PyTorch ☆328 · Updated last year
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆67 · Updated last year
- Mixed precision training from scratch with Tensors and CUDA ☆23 · Updated last year
- 100 days of building GPU kernels! ☆430 · Updated last month
- This repository contains the code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog post ☆91 · Updated last year
- Prune transformer layers ☆69 · Updated last year
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆301 · Updated last month
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆44 · Updated 8 months ago
- Notebooks for fine-tuning PaliGemma ☆107 · Updated last month
- This repository contains the training code of ParetoQ, introduced in our work "ParetoQ: Scaling Laws in Extremely Low-bit LLM Quantization" ☆64 · Updated this week
- All homeworks for TinyML and Efficient Deep Learning Computing 6.5940 • Fall 2023 • https://efficientml.ai ☆170 · Updated last year
- Fast low-bit matmul kernels in Triton ☆303 · Updated last week
- An extension of the nanoGPT repository for training small MoE models. ☆147 · Updated 2 months ago
- Simple Adaptation of BitNet ☆32 · Updated last year
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆357 · Updated 2 months ago
- PTX tutorial written purely by AIs (OpenAI's Deep Research and Claude 3.7) ☆67 · Updated 2 months ago
- ☆39 · Updated 3 weeks ago
- ☆188 · Updated 3 months ago