☆162 · Sep 15, 2023 · Updated 2 years ago
Alternatives and similar repositories for INT8-Flash-Attention-FMHA-Quantization
Users interested in INT8-Flash-Attention-FMHA-Quantization are comparing it to the libraries listed below.
- The only known (as of 2022) open-source, easy-to-understand implementations of basic algorithms in TD-CEM. (Please star and fork this project if… ☆15 · Mar 1, 2022 · Updated 4 years ago
- Overlapping Schwarz Domain Decomposition Finite Element Algorithm in both Matlab and serial/parallel C++ ☆18 · Mar 1, 2022 · Updated 4 years ago
- Forward and backward Attention DNN operators implemented with LibTorch, cuDNN, and Eigen. ☆31 · Jun 6, 2023 · Updated 2 years ago
- ☆12 · Mar 4, 2022 · Updated 4 years ago
- CUDA 8-bit Tensor Core Matrix Multiplication based on the m16n16k16 WMMA API (a minimal sketch appears after this list) ☆36 · Sep 15, 2023 · Updated 2 years ago
- GPTQ inference Triton kernel ☆321 · May 18, 2023 · Updated 2 years ago
- This repository contains integer operators on GPUs for PyTorch. ☆236 · Sep 29, 2023 · Updated 2 years ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆97 · Feb 20, 2026 · Updated last month
- ☆87 · Jan 23, 2025 · Updated last year
- Reorder-based post-training quantization for large language models ☆199 · May 17, 2023 · Updated 2 years ago
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency ☆113 · Sep 10, 2024 · Updated last year
- Official Implementation of SEA: Sparse Linear Attention with Estimated Attention Mask (ICLR 2024) ☆12 · Jun 20, 2025 · Updated 9 months ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆336 · Jul 2, 2024 · Updated last year
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,632 · Jul 12, 2024 · Updated last year
- An easy-to-understand TensorOp Matmul Tutorial ☆423 · Mar 5, 2026 · Updated last month
- GPTQ inference TVM kernel ☆40 · Apr 25, 2024 · Updated last year
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,269 · Updated this week
- High-performance PyTorch modules ☆18 · Jan 14, 2023 · Updated 3 years ago
- Prototype routines for GPU quantization written using PyTorch. ☆21 · Updated this week
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,287 · Mar 27, 2024 · Updated 2 years ago
- ☆44 · Nov 1, 2025 · Updated 5 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆116 · Mar 22, 2023 · Updated 3 years ago
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆397 · Feb 24, 2024 · Updated 2 years ago
- Benchmark tests supporting the TiledCUDA library. ☆18 · Nov 19, 2024 · Updated last year
- ☆110 · Mar 12, 2026 · Updated last month
- Repository for CPU Kernel Generation for LLM Inference ☆28 · Jul 13, 2023 · Updated 2 years ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,585 · Jan 28, 2026 · Updated 2 months ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆261 · Aug 9, 2025 · Updated 8 months ago
- Benchmark PyTorch Custom Operators ☆14 · Jul 6, 2023 · Updated 2 years ago
- An extension of TVMScript for writing simple, high-performance GPU kernels with tensor cores. ☆51 · Jul 23, 2024 · Updated last year
- Performance of the C++ interfaces of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios. ☆45 · Feb 27, 2025 · Updated last year
- IntLLaMA: A fast and light quantization solution for LLaMA ☆18 · Jul 21, 2023 · Updated 2 years ago
- Implementation of fused cosine-similarity attention in the same style as Flash Attention ☆220 · Feb 13, 2023 · Updated 3 years ago
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens ☆1,051 · Sep 4, 2024 · Updated last year
- This repository contains the experimental PyTorch-native float8 training UX ☆226 · Aug 1, 2024 · Updated last year
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆22 · Apr 9, 2026 · Updated last week
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆324 · Mar 4, 2025 · Updated last year
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆240 · Sep 24, 2023 · Updated 2 years ago
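For context on the m16n16k16 WMMA entry above, here is a minimal CUDA sketch of a single INT8 tensor-core tile multiply. It is an illustration under stated assumptions, not code from the linked repository: the kernel name and the leading-dimension parameters are invented for the example.

```cuda
// Hypothetical sketch of an m16n16k16 INT8 WMMA tile multiply, not taken
// from any repository listed above. Requires compute capability 7.2+;
// launch with a single warp, e.g. wmma_int8_tile<<<1, 32>>>(A, B, C, 16, 16, 16).
#include <mma.h>
using namespace nvcuda;

// Computes one 16x16 tile: C(int32) = A(int8, row-major) x B(int8, col-major).
// lda/ldb/ldc are caller-chosen leading dimensions (illustrative names).
__global__ void wmma_int8_tile(const signed char* A, const signed char* B,
                               int* C, int lda, int ldb, int ldc) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, signed char, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, signed char, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, int> c_frag;

    wmma::fill_fragment(c_frag, 0);                  // zero the int32 accumulators
    wmma::load_matrix_sync(a_frag, A, lda);          // load a 16x16 int8 A tile
    wmma::load_matrix_sync(b_frag, B, ldb);          // load a 16x16 int8 B tile
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // int8 x int8 -> int32 MMA
    wmma::store_matrix_sync(C, c_frag, ldc, wmma::mem_row_major);
}
```

A full INT8 GEMM, or a quantized-attention score computation as in the repositories above, tiles this primitive over the whole problem and folds the quantization scales into the int32-to-float dequantization of the accumulator.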