Alternatives and similar repositories for AutoAWQ_kernels (☆79, updated Nov 26, 2024)

Users interested in AutoAWQ_kernels are comparing it to the libraries listed below.
- Standalone Flash Attention v2 kernel without libtorch dependency (☆114, updated Sep 10, 2024)
- NVIDIA TensorRT Hackathon 2023 finals topic: building and optimizing the Qwen-7B (Tongyi Qianwen) model with TensorRT-LLM (☆43, updated Oct 20, 2023)
- Easy and Efficient Quantization for Transformers (☆206, updated Jan 28, 2026)
- ☆168, updated Feb 5, 2026
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA and CuTe APIs to achieve peak performance (☆147, updated May 10, 2025)
- Awesome code, projects, books, etc. related to CUDA (☆31, updated Feb 3, 2026)
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference (☆2,315, updated May 11, 2025)
- DeeperGEMM: crazy optimized version (☆74, updated May 5, 2025)
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer (☆96, updated Feb 20, 2026)
- ☆13, updated Jan 7, 2025
- 🎉My Collections of CUDA Kernels~ (☆11, updated Jun 25, 2024)
- Website for CSE 234, Winter 2025 (☆13, updated Mar 24, 2025)
- PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation (☆32, updated Nov 16, 2024)
- FlashRNN: Fast RNN Kernels with I/O Awareness (☆175, updated Oct 20, 2025)
- Yet another coding assistant powered by an LLM (☆16, updated Sep 11, 2024)
- OpenAI-compatible HTTP server for Spark-TTS, with macOS support (☆15, updated May 1, 2025)
- Jig for the Open-Source IR Replicability Challenge (OSIRRC) (☆13, updated Dec 8, 2022)
- Extensible collectives library in Triton (☆95, updated Mar 31, 2025)
- [HPCA 2026] A GPU-optimized system for efficient long-context LLM decoding with low-bit KV cache (☆80, updated Dec 18, 2025)
- Implement Flash Attention using CuTe (☆101, updated Dec 17, 2024)
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling (☆21, updated Feb 9, 2026)
- A proxy that hosts multiple single-model runners such as llama.cpp and vLLM (☆12, updated May 30, 2025)
- A Triton-only attention backend for vLLM (☆24, updated Feb 11, 2026)
- JAX bindings for the flash-attention3 kernels (☆21, updated Jan 2, 2026)
- ☆65, updated Apr 26, 2025
- ☆53, updated Feb 24, 2026
- ☆261, updated Jul 11, 2024
- ☆118, updated May 19, 2025
- graspologic-native is a library of Rust components that adds additional capability to graspologic, a Python library for intelligently buildin… (☆18, updated Apr 2, 2025)
- A quantization algorithm for LLMs (☆148, updated Jun 21, 2024)
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs (☆154, updated Aug 21, 2025)
- A CUDA runtime environment based on the CUDA Driver API (☆15, updated Jul 30, 2025)
- Speed up image preprocessing with CUDA when handling images or running TensorRT inference (☆85, updated Nov 5, 2025)
- A CUDA kernel for NHWC GroupNorm for PyTorch (☆23, updated Nov 15, 2024)
- ☆44, updated this week
- FlashInfer: Kernel Library for LLM Serving (☆5,057, updated this week)
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios (☆44, updated Feb 27, 2025)
- Hydragen: High-Throughput LLM Inference with Shared Prefixes (☆48, updated May 10, 2024)
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… (☆816, updated Mar 6, 2025)