hkproj / quantization-notes
Notes on quantization in neural networks
☆77 · Updated last year
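For readers skimming this list, a minimal sketch of the kind of technique the notes cover: symmetric per-tensor int8 quantization and dequantization of a weight tensor in PyTorch. This is an illustrative example written for this page; the function names and the choice of PyTorch are assumptions, not code from the repository.

```python
import torch

def quantize_int8(w: torch.Tensor):
    """Symmetric per-tensor int8 quantization (illustrative sketch, not from the repo)."""
    scale = w.abs().max() / 127.0                      # map the largest magnitude onto the int8 range
    q = torch.clamp((w / scale).round(), -128, 127).to(torch.int8)
    return q, scale

def dequantize_int8(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximate float tensor from the int8 values and the stored scale."""
    return q.float() * scale

w = torch.randn(4, 4)
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)
print((w - w_hat).abs().max())                         # per-element error is bounded by roughly scale / 2
```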
Alternatives and similar repositories for quantization-notes:
Users interested in quantization-notes are comparing it to the repositories listed below.
- ☆136 · Updated 2 months ago
- Mixed precision training from scratch with Tensors and CUDA ☆21 · Updated 10 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆167 · Updated this week
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆153 · Updated this week
- Distributed training (multi-node) of a Transformer model ☆59 · Updated 11 months ago
- Complete implementation of Llama2 with/without KV cache & inference 🚀 ☆47 · Updated 10 months ago
- GPU Kernels ☆155 · Updated this week
- Prune transformer layers ☆68 · Updated 9 months ago
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆44 · Updated 6 months ago
- ☆151 · Updated last year
- 100 days of building GPU kernels! ☆311 · Updated this week
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆104 · Updated 5 months ago
- Reference implementation of Mistral AI 7B v0.1 model. ☆28 · Updated last year
- LORA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆99 · Updated last year
- Fast low-bit matmul kernels in Triton ☆267 · Updated this week
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆35 · Updated 10 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024☆277Updated last month
- Code for studying the super weight in LLMs ☆94 · Updated 3 months ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆306 · Updated 2 weeks ago
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆63 · Updated last year
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆234 · Updated last month
- Google TPU optimizations for transformers models ☆104 · Updated 2 months ago
- LoRA and DoRA from Scratch Implementations (see the LoRA sketch after this list) ☆198 · Updated last year
- BERT explained from scratch ☆13 · Updated last year
- Cataloging released Triton kernels. ☆208 · Updated 2 months ago
- A repository dedicated to evaluating the performance of quantized LLaMA3 using various quantization methods. ☆179 · Updated 2 months ago
- ☆158 · Updated last month
- Code repo for the paper "SpinQuant: LLM quantization with learned rotations" ☆236 · Updated last month
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆105 · Updated 5 months ago
- Code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog po… ☆87 · Updated last year
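Several entries above (the LoRA/DoRA from-scratch implementations, for example) center on low-rank adaptation. As a rough illustration of the idea, here is a minimal LoRA-style linear layer in PyTorch; the class name, rank, and scaling factor are assumptions made for this sketch, not code from any of the listed repositories.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (illustrative sketch)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)          # freeze the pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero-init: training starts at the base model
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = base(x) + scaling * x A^T B^T, where A and B hold the low-rank update
        return self.base(x) + self.scaling * (x @ self.lora_A.t() @ self.lora_B.t())

layer = LoRALinear(nn.Linear(64, 64), r=8)
out = layer(torch.randn(2, 64))                          # only lora_A and lora_B receive gradients
```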