ankan-ban / llama_cu_awq
Llama INT4 CUDA inference with AWQ
☆54 · Jan 20, 2025 · Updated last year
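For context, llama_cu_awq runs Llama inference with AWQ-quantized INT4 weights that are dequantized to FP16 inside the CUDA kernels. The sketch below illustrates that general idea for a matrix-vector product; the kernel name, packing layout, and scale/zero storage are illustrative assumptions, not the repository's actual implementation.

```cuda
#include <cuda_fp16.h>
#include <stdint.h>

// y = W * x for a [rows x cols] weight matrix W stored as packed INT4:
// eight 4-bit weights per uint32_t, with per-group FP16 scales and zero points
// (group_size consecutive columns share one scale/zero). Layout is assumed.
__global__ void int4_gemv_sketch(const uint32_t* __restrict__ w_packed,
                                 const half* __restrict__ scales,
                                 const half* __restrict__ zeros,
                                 const half* __restrict__ x,
                                 half* __restrict__ y,
                                 int rows, int cols, int group_size) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= rows) return;

    int packs_per_row  = cols / 8;           // 8 nibbles per 32-bit word
    int groups_per_row = cols / group_size;
    float acc = 0.0f;

    for (int p = 0; p < packs_per_row; ++p) {
        uint32_t packed = w_packed[row * packs_per_row + p];
        for (int n = 0; n < 8; ++n) {
            int col = p * 8 + n;
            int g   = col / group_size;      // quantization group of this column
            float s = __half2float(scales[row * groups_per_row + g]);
            float z = __half2float(zeros [row * groups_per_row + g]);
            // Dequantize the n-th nibble on the fly: w = (q - z) * s
            float q = (float)((packed >> (4 * n)) & 0xF);
            acc += (q - z) * s * __half2float(x[col]);
        }
    }
    y[row] = __float2half(acc);
}
```

Production kernels (including those in this repository and in Marlin-style projects listed below) additionally vectorize loads, stage data through shared memory, and use tensor cores; the sketch only shows the packed-INT4 plus per-group scale/zero dequantization pattern.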
Alternatives and similar repositories for llama_cu_awq
Users interested in llama_cu_awq are comparing it to the libraries listed below.
- Inference Llama 2 in one file of pure CUDA · ☆17 · Aug 20, 2023 · Updated 2 years ago
- Multiple GEMM operators are constructed with CUTLASS to support LLM inference. · ☆20 · Aug 3, 2025 · Updated 6 months ago
- Flash Attention in ~100 lines of CUDA (forward pass only) · ☆11 · Jun 10, 2024 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency · ☆114 · Sep 10, 2024 · Updated last year
- Decoding Attention is specially optimized for MHA, MQA, GQA and MLA using CUDA cores for the decoding stage of LLM inference. · ☆46 · Jun 11, 2025 · Updated 8 months ago
- Performance of the C++ interface of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios. · ☆44 · Feb 27, 2025 · Updated 11 months ago
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache · ☆356 · Nov 20, 2025 · Updated 2 months ago
- Awesome code, projects, books, etc. related to CUDA · ☆30 · Feb 3, 2026 · Updated 2 weeks ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer · ☆96 · Sep 13, 2025 · Updated 5 months ago
- Inference deployment of Llama 3 · ☆11 · Apr 21, 2024 · Updated last year
- Uses tensor cores to compute back-to-back HGEMM (half-precision general matrix multiplication) with MMA PTX instructions. · ☆13 · Nov 3, 2023 · Updated 2 years ago
- ☆130 · Dec 24, 2024 · Updated last year
- Llama 2 inference · ☆43 · Nov 4, 2023 · Updated 2 years ago
- ☆160 · Sep 15, 2023 · Updated 2 years ago
- ☆14 · Nov 3, 2025 · Updated 3 months ago
- JAX bindings for the flash-attention3 kernels · ☆20 · Jan 2, 2026 · Updated last month
- paper-read-notes · ☆13 · Sep 26, 2024 · Updated last year
- Implementation of a histogram equalization program using CUDA. Histogram equalization is a technique for adjusting image intensities to e… · ☆13 · Jan 3, 2021 · Updated 5 years ago
- ☆27 · Jan 8, 2024 · Updated 2 years ago
- An easy-to-understand TensorOp Matmul tutorial · ☆410 · Feb 11, 2026 · Updated last week
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference · ☆372 · Jul 10, 2025 · Updated 7 months ago
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. · ☆752 · Aug 6, 2025 · Updated 6 months ago
- TensorRT-in-Action is a GitHub repository providing code examples for using TensorRT, with accompanying Jupyter Notebooks. · ☆15 · Jun 1, 2023 · Updated 2 years ago
- ggml study notes; ggml is an inference framework for machine learning. · ☆18 · Mar 24, 2024 · Updated last year
- Applied AI experiments and examples for PyTorch · ☆317 · Aug 22, 2025 · Updated 5 months ago
- ☆65 · Apr 26, 2025 · Updated 9 months ago
- GPTQ inference Triton kernel · ☆321 · May 18, 2023 · Updated 2 years ago
- ☆118 · May 19, 2025 · Updated 8 months ago
- ☆165 · Feb 5, 2026 · Updated last week
- YiRage (Yield Revolutionary AGile Engine) - Multi-Backend LLM Inference Optimization. Extends Mirage with comprehensive support for CUDA,… · ☆36 · Jan 28, 2026 · Updated 2 weeks ago
- A collection of saved code snippets · ☆13 · Jun 6, 2023 · Updated 2 years ago
- A repo that covers pseudocode for AI research papers · ☆17 · Jun 20, 2023 · Updated 2 years ago
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization · ☆37 · Sep 24, 2024 · Updated last year
- Several optimization methods of half-precision general matrix-vector multiplication (HGEMV) using CUDA cores. · ☆72 · Sep 8, 2024 · Updated last year
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… · ☆812 · Mar 6, 2025 · Updated 11 months ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving · ☆336 · Jul 2, 2024 · Updated last year
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable · ☆209 · Sep 21, 2024 · Updated last year
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. · ☆1,018 · Sep 4, 2024 · Updated last year
- Inference of GOT-OCR2.0 using mnn-llm · ☆14 · Oct 2, 2024 · Updated last year