lwy2020 / MicroMix
MicroMix: Efficient Mixed-Precision Quantization with Microscaling Formats for Large Language Models
☆28 · Updated 2 weeks ago
Alternatives and similar repositories for MicroMix
Users interested in MicroMix are comparing it to the libraries listed below.
- ☆145 · Updated last month
- ☆26 · Updated last year
- Summary of some awesome work for optimizing LLM inference ☆173 · Updated 2 months ago
- Curated collection of papers in MoE model inference ☆341 · Updated 3 months ago
- ☆12 · Updated last year
- ☆31 · Updated 10 months ago
- Large Language Model (LLM) Serving Paper and Resource List ☆24 · Updated 8 months ago
- Examples of CUDA implementations by Cutlass CuTe ☆270 · Updated 7 months ago
- Lab 5 project of MIT-6.5940, deploying LLaMA2-7B-chat on one's laptop with TinyChatEngine. ☆18 · Updated 2 years ago
- GEMM by WMMA (tensor core) ☆14 · Updated 3 years ago
- A direct convolution library targeting ARM multi-core CPUs. ☆12 · Updated last year
- 📚200+ Tensor/CUDA Cores Kernels, ⚡️flash-attn-mma, ⚡️hgemm with WMMA, MMA and CuTe (98%~100% TFLOPS of cuBLAS/FA2 🎉🎉). ☆63 · Updated 9 months ago
- ☆161 · Updated 3 months ago
- ☆224 · Updated 3 months ago
- [ICLR2025]: OSTQuant: Refining Large Language Model Quantization with Orthogonal and Scaling Transformations for Better Distribution Fitt… ☆88 · Updated 10 months ago
- ☆45 · Updated last year
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆174 · Updated last year
- From Minimal GEMM to Everything ☆104 · Updated last month
- ☆113 · Updated 2 years ago
- An easy-to-understand TensorOp Matmul Tutorial ☆404 · Updated this week
- Repo for SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting (ISCA25) ☆70 · Updated 9 months ago
- ☆70 · Updated last year
- PyTorch emulation library for Microscaling (MX)-compatible data formats ☆340 · Updated 7 months ago
- This repository serves as a comprehensive survey of LLM development, featuring numerous research papers along with their corresponding co… ☆281 · Updated 2 months ago
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores ☆57 · Updated 2 years ago
- [HPCA 2026] A GPU-optimized system for efficient long-context LLMs decoding with low-bit KV cache. ☆80 · Updated last month
- Several optimization methods of half-precision general matrix multiplication (HGEMM) using tensor core with WMMA API and MMA PTX instruct… ☆522 · Updated last year
- flash attention tutorial written in python, triton, cuda, cutlass ☆484 · Updated 2 weeks ago
- Analyze the inference of Large Language Models (LLMs). Analyze aspects like computation, storage, transmission, and hardware roofline mod… ☆617 · Updated last year
- Puzzles for learning Triton, play it with minimal environment configuration! ☆624 · Updated last month