InternLM / turbomind
☆27, updated this week
Related projects
Alternatives and complementary repositories for turbomind
- Odysseus: Playground of LLM Sequence Parallelism (☆55, updated 4 months ago)
- GPTQ inference TVM kernel (☆35, updated 6 months ago)
- Standalone Flash Attention v2 kernel without libtorch dependency (☆98, updated 2 months ago)
- Decoding Attention: multi-head attention (MHA) optimized with CUDA cores for the decoding stage of LLM inference (☆23, updated last week; see the decode-attention sketch below this list)
- PyTorch bindings for CUTLASS grouped GEMM (☆53, updated last week; see the grouped-GEMM note below this list)
- Quantized Attention on GPU (☆29, updated last week)
- FP8 flash attention implemented with the cutlass library on the Ada architecture (☆52, updated 3 months ago)
- Official PyTorch implementation of FlatQuant: Flatness Matters for LLM Quantization (☆59, updated this week)
- [WIP] Context parallel attention that works with torch.compile (☆20, updated this week)
- Summary of system papers, frameworks, code, and tools for training or serving large models (☆56, updated 10 months ago)
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024; ☆20, updated 4 months ago)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆15, updated 5 months ago)
- Materials for learning SGLang (☆86, updated this week)
- IntLLaMA: a fast and light quantization solution for LLaMA (☆18, updated last year)
- Patch convolution to avoid the large GPU memory usage of Conv2D (☆79, updated 5 months ago; see the patched-convolution sketch below this list)
- QQQ, a hardware-optimized W4A8 quantization solution for LLMs (☆76, updated last month; see the W4A8 sketch below this list)
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer (☆85, updated 8 months ago)
- An algorithm for static activation quantization of LLMs (☆68, updated this week)
- Performance benchmarks of the C++ interfaces of flash attention and flash attention v2 in large language model (LLM) inference scenarios (☆28, updated 2 months ago)
- A suite for parallel inference of Diffusion Transformers (DiTs) on multi-GPU clusters (☆32, updated 3 months ago)
- LLaMA INT4 CUDA inference with AWQ (☆47, updated 4 months ago)
- Boosting 4-bit inference kernels with 2:4 sparsity (☆51, updated 2 months ago; see the 2:4 pruning sketch below this list)
- Simple and fast low-bit matmul kernels in CUDA / Triton (☆140, updated this week)
- Official implementation of the ICLR 2024 paper AffineQuant (☆21, updated 7 months ago)
- ☆55, updated 5 months ago
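
Several entries above target the decode stage of LLM inference, where each step adds a single query token that attends over the cached keys and values of the whole prefix. As a rough reference for what a decode-attention kernel such as Decoding Attention computes, here is a plain PyTorch sketch of the math (not the repository's actual CUDA implementation):

```python
import torch

def decode_mha(q, k_cache, v_cache):
    # q:       [batch, heads, 1, head_dim]  -- the single new token's query
    # k_cache: [batch, heads, seq, head_dim]
    # v_cache: [batch, heads, seq, head_dim]
    scale = q.shape[-1] ** -0.5
    scores = (q @ k_cache.transpose(-1, -2)) * scale   # [b, h, 1, seq]
    probs = scores.softmax(dim=-1)
    return probs @ v_cache                             # [b, h, 1, head_dim]
```

Because the query length is 1, this computation is dominated by reading the KV cache rather than by math throughput, which is why a CUDA-core implementation can be competitive with tensor cores at this stage.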
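The grouped-GEMM bindings above expose a kernel that runs many independent GEMMs of varying shapes in one launch, the pattern behind MoE expert layers. Semantically it is just the loop below, which the CUTLASS kernel fuses into a single launch; the function name here is illustrative, not the binding's actual API:

```python
import torch

def grouped_gemm_reference(xs, ws):
    # xs: list of [m_i, k] activation tiles; ws: list of [k, n] weights.
    # A grouped GEMM computes all of these products in one kernel launch
    # instead of len(xs) separate launches.
    return [x @ w for x, w in zip(xs, ws)]
```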
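The patch-convolution entry avoids materializing one huge Conv2D activation by convolving the input in tiles that overlap by a halo. A minimal sketch for the 3x3 / stride-1 / padding-1 case, assuming row-wise chunking (the repository's actual tiling strategy may differ):

```python
import torch
import torch.nn.functional as F

def patched_conv2d(x, weight, bias=None, chunks=4):
    # Equivalent to F.conv2d(x, weight, bias, padding=1) for a 3x3,
    # stride-1 kernel, but processes the input in row chunks with a
    # one-row halo so only one tile's activations are live at a time.
    n, c, h, w = x.shape
    step = max(1, (h + chunks - 1) // chunks)
    outs = []
    for top in range(0, h, step):
        bottom = min(top + step, h)
        lo, hi = max(top - 1, 0), min(bottom + 1, h)   # halo rows
        tile = F.pad(x[:, :, lo:hi, :],
                     (1, 1, int(top == 0), int(bottom == h)))
        outs.append(F.conv2d(tile, weight, bias))      # padding done above
    return torch.cat(outs, dim=2)

x = torch.randn(1, 8, 64, 64)
w = torch.randn(16, 8, 3, 3)
assert torch.allclose(patched_conv2d(x, w), F.conv2d(x, w, padding=1), atol=1e-4)
```

Peak activation memory drops roughly by the chunk factor, at the cost of recomputing the one-row halos.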
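W4A8 schemes such as QQQ quantize weights to 4-bit and activations to 8-bit integers so the GEMM can run on integer pipelines. A dequantized reference for what such a kernel computes, assuming simple symmetric per-channel weight scales and a per-tensor activation scale (QQQ's actual scheme is more sophisticated):

```python
import torch

def w4a8_reference(w, x):
    # Per-output-channel symmetric INT4 weights, per-tensor symmetric
    # INT8 activations; the integer GEMM result is rescaled back to float.
    # Scales and formats here are illustrative, not QQQ's exact scheme.
    w_scale = (w.abs().amax(dim=1, keepdim=True) / 7.0).clamp(min=1e-8)
    w_q = (w / w_scale).round().clamp(-7, 7)        # int4 range [-7, 7]
    x_scale = (x.abs().amax() / 127.0).clamp(min=1e-8)
    x_q = (x / x_scale).round().clamp(-127, 127)    # int8 range [-127, 127]
    return (x_q @ w_q.t()) * x_scale * w_scale.t()  # [m, out]
```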
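2:4 structured sparsity, as in the 4-bit + 2:4 entry above, keeps at most 2 nonzeros in every contiguous group of 4 weights, a pattern that sparse tensor cores (Ampere and later) can exploit. A magnitude-pruning sketch that enforces the pattern; real kernels additionally repack the tensor into the compressed 2:4 storage format:

```python
import torch

def prune_2_4(w):
    # In every group of 4 weights along the input dimension, keep the
    # 2 largest-magnitude entries and zero the rest.
    out_dim, in_dim = w.shape
    assert in_dim % 4 == 0
    groups = w.reshape(out_dim, in_dim // 4, 4)
    idx = groups.abs().topk(2, dim=-1).indices
    mask = torch.zeros_like(groups, dtype=torch.bool).scatter_(-1, idx, True)
    return (groups * mask).reshape(out_dim, in_dim)
```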