HectorHHZ / Sparse_Matrix_Tuning
GitHub repo for the ICLR 2025 paper "Fine-tuning Large Language Models with Sparse Matrices"
☆21 · Updated 6 months ago
Alternatives and similar repositories for Sparse_Matrix_Tuning
Users interested in Sparse_Matrix_Tuning are comparing it to the libraries listed below.
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆59 · Updated 7 months ago
- A tile-based language built for AI computation across all scales ☆74 · Updated this week
- ☆64 · Updated 6 months ago
- ☆83 · Updated 9 months ago
- ☆39 · Updated 3 months ago
- Implements Flash Attention using CuTe. ☆96 · Updated 10 months ago
- ☆50 · Updated 5 months ago
- A GPU-optimized system for efficient long-context LLM decoding with a low-bit KV cache. ☆61 · Updated 2 weeks ago
- ☆19 · Updated last year
- A framework that reduces autotuning overhead to zero for well-known deployments. ☆85 · Updated last month
- DeeperGEMM: a heavily optimized version of DeepGEMM ☆72 · Updated 6 months ago
- ⚡️ Write HGEMM from scratch using Tensor Cores with the WMMA, MMA, and CuTe APIs, achieving peak performance ⚡️ ☆124 · Updated 6 months ago
- FP8 flash attention implemented on the Ada architecture using the CUTLASS library ☆78 · Updated last year
- DLSlime: Flexible & Efficient Heterogeneous Transfer Toolkit ☆75 · Updated this week
- Quantized Attention on GPU ☆44 · Updated 11 months ago
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NeurIPS'24) ☆43 · Updated 10 months ago
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" ☆85 · Updated 4 months ago
- A practical way to learn Swizzle ☆31 · Updated 9 months ago
- Multi-Level Triton Runner supporting Python, IR, PTX, and cubin. ☆76 · Updated last week
- Triton adapter for Ascend. Mirror of https://gitee.com/ascend/triton-ascend ☆81 · Updated last week
- LLaMA INT4 CUDA inference with AWQ ☆55 · Updated 9 months ago
- QAQ: Quality Adaptive Quantization for LLM KV Cache ☆53 · Updated last year
- ☆120 · Updated 2 months ago
- ☆158 · Updated last year
- High Performance FP8 GEMM Kernels for SM89 and later GPUs. ☆20 · Updated 9 months ago
- ☆58 · Updated last year
- A standalone GEMM kernel for FP16 activations and quantized weights, extracted from FasterTransformer ☆96 · Updated last month
- ☆36 · Updated last year
- Optimize GEMM with Tensor Cores, step by step ☆32 · Updated last year
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆100 · Updated 4 months ago