mit-han-lab / tinychat-tutorial
☆62 Updated 5 months ago
Alternatives and similar repositories for tinychat-tutorial:
Users that are interested in tinychat-tutorial are comparing it to the libraries listed below
- Implement Flash Attention using CuTe. ☆74 Updated 3 months ago
- FFPA (Split-D): Yet another Faster Flash Attention with O(1) GPU SRAM complexity for large headdim, 1.8x~3x faster than SDPA EA. ☆164 Updated last week
- ☆78 Updated 2 weeks ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆103 Updated 9 months ago
- llama INT4 CUDA inference with AWQ. ☆54 Updated 2 months ago
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer L… ☆47 Updated 2 years ago
- Official PyTorch implementation of "FlatQuant: Flatness Matters for LLM Quantization". ☆116 Updated 2 weeks ago
- ☆68 Updated 2 months ago
- [ACL 2024] A novel QAT with Self-Distillation framework to enhance ultra-low-bit LLMs. ☆108 Updated 10 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer (a toy dequantize-then-matmul sketch of this pattern follows the list). ☆91 Updated 2 weeks ago
- ☆161 Updated last year
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. ☆35 Updated last week
- DeeperGEMM: crazy optimized version. ☆65 Updated last week
- ☆142 Updated 2 years ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs. ☆41 Updated 2 weeks ago
- This repository contains integer operators on GPUs for PyTorch. ☆200 Updated last year
- ☆52 Updated 2 months ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆107 Updated 4 months ago
- Playing with GEMM in TVM. ☆90 Updated last year
- ☆91 Updated 7 months ago
- Quantized Attention on GPU. ☆45 Updated 4 months ago
- A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆32 Updated 2 weeks ago
- Standalone Flash Attention v2 kernel without libtorch dependency. ☆108 Updated 7 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆80 Updated 5 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆35 Updated last month
- Benchmark code for the "Online normalizer calculation for softmax" paper (a minimal sketch of the online-normalizer idea follows the list). ☆90 Updated 6 years ago
- A llama model inference framework implemented in CUDA C++. ☆49 Updated 5 months ago
- GPTQ inference TVM kernel. ☆38 Updated 11 months ago
- Odysseus: Playground of LLM Sequence Parallelism. ☆68 Updated 9 months ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆180 Updated 2 months ago
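
For context on the fp16-activation / quantized-weight GEMM entries above (e.g. the kernel extracted from FasterTransformer), below is a minimal PyTorch sketch of the weight-only quantization pattern such kernels implement. It is illustrative only: the symmetric per-channel INT4 scheme and the helper names `quantize_int4_per_channel` / `weight_only_linear` are assumptions, not code from any listed repository, and a real kernel fuses the dequantization into the GEMM main loop rather than materializing the full-precision weight.

```python
# Illustrative sketch: symmetric per-output-channel INT4 quantization plus a
# dequantize-then-matmul linear layer (the W4A16 pattern).
import torch

def quantize_int4_per_channel(w: torch.Tensor):
    """Quantize a weight matrix [out_features, in_features] to INT4 values
    (stored in int8) with one scale per output channel."""
    scale = w.abs().amax(dim=1, keepdim=True) / 7.0        # INT4 range is -8..7
    q = torch.clamp(torch.round(w / scale), -8, 7).to(torch.int8)
    return q, scale

def weight_only_linear(x: torch.Tensor, q: torch.Tensor, scale: torch.Tensor):
    """Activation x [batch, in_features] times the quantized weight."""
    w_deq = q.to(x.dtype) * scale.to(x.dtype)               # on-the-fly dequant
    return x @ w_deq.t()

w = torch.randn(256, 128)
x = torch.randn(4, 128)
q, s = quantize_int4_per_channel(w)
print((weight_only_linear(x, q, s) - x @ w.t()).abs().max())  # quantization error
```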
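Likewise, the "Online normalizer calculation for softmax" benchmark entry refers to a one-pass recurrence that maintains the running maximum and the softmax normalizer together. The sketch below is a minimal illustration of that recurrence only, not the benchmark's code.

```python
# Illustrative sketch: streaming (one-pass) max/normalizer recurrence for a
# numerically stable softmax, avoiding a separate pass just to find the max.
import math

def online_softmax(logits):
    m = float("-inf")   # running maximum
    d = 0.0             # running normalizer: sum of exp(x_i - m)
    for x in logits:
        m_new = max(m, x)
        # rescale the old normalizer to the new maximum, then add the new term
        d = d * math.exp(m - m_new) + math.exp(x - m_new)
        m = m_new
    # one more pass only to emit the probabilities
    return [math.exp(x - m) / d for x in logits]

print(online_softmax([1.0, 2.0, 3.0]))   # ~[0.090, 0.245, 0.665]
```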