UDC-GAC / venom
A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores
☆57 · Nov 24, 2023 · Updated 2 years ago
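The "N:M format" in the title refers to structured sparsity in which each group of M consecutive weights keeps at most N nonzeros (2:4 is the pattern NVIDIA's sparse tensor cores accelerate in hardware). A minimal magnitude-pruning sketch of the idea, assuming NumPy and not taken from the venom codebase:

```python
import numpy as np

def prune_n_m(w, n=2, m=4):
    """Zero out all but the n largest-magnitude entries in each
    contiguous group of m values along the last axis (N:M sparsity)."""
    assert w.shape[-1] % m == 0, "last dimension must be divisible by m"
    groups = w.reshape(-1, m)
    # indices of the (m - n) smallest-magnitude entries per group
    drop = np.argsort(np.abs(groups), axis=1)[:, : m - n]
    mask = np.ones_like(groups, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)
    return (groups * mask).reshape(w.shape)

# keeps the 2 largest-magnitude weights in each group of 4
w = np.array([[0.9, -0.1, 0.4, 0.05, -0.7, 0.2, 0.0, 0.3]])
print(prune_n_m(w))
```

The resulting mask is what an N:M kernel stores compactly (values plus per-group indices) so the hardware can skip the zeros.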
Alternatives and similar repositories for venom
Users interested in venom are comparing it to the libraries listed below.
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆61 · Mar 25, 2025 · Updated 10 months ago
- ☆32 · Aug 24, 2022 · Updated 3 years ago
- ☆20 · Sep 28, 2024 · Updated last year
- ☆164 · Jul 22, 2024 · Updated last year
- Source code of the SC '23 paper: "DASP: Specific Dense Matrix Multiply-Accumulate Units Accelerated General Sparse Matrix-Vector Multipli… ☆27 · Jun 18, 2024 · Updated last year
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆234 · Sep 24, 2023 · Updated 2 years ago
- Official PyTorch implementation of CD-MOE ☆12 · Mar 29, 2025 · Updated 10 months ago
- Uses tensor cores to compute back-to-back HGEMM (half-precision general matrix multiplication) with MMA PTX instructions. ☆13 · Nov 3, 2023 · Updated 2 years ago
- Code to reproduce the experiments of the ICLR '24 paper "Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging" ☆12 · Oct 14, 2025 · Updated 4 months ago
- CUDA project for a university course ☆26 · Oct 26, 2020 · Updated 5 years ago
- [CVPR 2024] DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model ☆18 · Apr 16, 2024 · Updated last year
- A tiny FP8 multiplication unit written in Verilog. TinyTapeout 2 submission. ☆14 · Nov 23, 2022 · Updated 3 years ago
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores. ☆91 · Nov 23, 2022 · Updated 3 years ago
- Official GitHub repository for the paper "Towards timeout-less transport in commodity datacenter networks". ☆16 · Oct 12, 2021 · Updated 4 years ago
- Escoin: Efficient Sparse Convolutional Neural Network Inference on GPUs ☆16 · Feb 28, 2019 · Updated 6 years ago
- Sparse, differentiable numerics for PyTorch ☆17 · Updated this week
- Cohort Project ☆19 · Oct 23, 2025 · Updated 3 months ago
- New batched algorithm for sparse matrix-matrix multiplication (SpMM) ☆16 · May 7, 2019 · Updated 6 years ago
- Mirror of http://gitlab.hpcrl.cse.ohio-state.edu/chong/ppopp19_ae, refactored for readability ☆15 · Oct 20, 2021 · Updated 4 years ago
- ☆19 · Dec 3, 2019 · Updated 6 years ago
- [NeurIPS 2024] Search for Efficient LLMs ☆16 · Jan 16, 2025 · Updated last year
- XML representation of the x86 instruction set ☆29 · Jan 17, 2026 · Updated 3 weeks ago
- ☆14 · Sep 27, 2021 · Updated 4 years ago
- Study of Ampere's sparse matmul ☆18 · Jan 10, 2021 · Updated 5 years ago
- Heterogeneous Accelerated Compute Cluster (HACC) resources page ☆22 · Oct 7, 2025 · Updated 4 months ago
- [MLSys '22] Understanding GNN Computational Graph: A Coordinated Computation, IO, and Memory Perspective ☆22 · Sep 11, 2023 · Updated 2 years ago
- A simple implementation of [Mamba: Linear-Time Sequence Modeling with Selective State Spaces](https://arxiv.org/abs/2312.00752) ☆22 · Jan 22, 2024 · Updated 2 years ago
- [MLSys '25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys '25] LServe: Efficient Long-sequence LLM Se… ☆812 · Mar 6, 2025 · Updated 11 months ago
- Source code for the paper "LongGenBench: Long-context Generation Benchmark" ☆24 · Oct 8, 2024 · Updated last year
- ☆105 · Feb 25, 2025 · Updated 11 months ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆142 · Mar 31, 2023 · Updated 2 years ago
- ☆26 · Feb 17, 2025 · Updated 11 months ago
- A tuned sparse matrix-dense vector multiplication (SpMV) library ☆22 · Mar 21, 2016 · Updated 9 years ago
- ☆22 · Feb 18, 2025 · Updated 11 months ago
- ☆31 · Apr 2, 2025 · Updated 10 months ago
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆23 · Nov 11, 2025 · Updated 3 months ago
- ☆158 · Feb 15, 2025 · Updated 11 months ago
- A novel spatial accelerator for horizontal diffusion weather stencil computation, as described in the ICS 2023 paper by Singh et al. (https:/… ☆22 · Jul 27, 2023 · Updated 2 years ago
- [ICLR 2024] Jaiswal, A., Gan, Z., Du, X., Zhang, B., Wang, Z., & Yang, Y. Compressing LLMs: The Truth is Rarely Pure and Never Simple. ☆27 · Apr 21, 2025 · Updated 9 months ago
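Several of the projects above (Magicube, Flash-LLM, SparseTIR, the SpMM/SpMV libraries) accelerate sparse matrix multiplication on GPUs. For orientation, the kernel they optimize reduces to this reference loop over a CSR-format matrix; a plain-Python sketch, not tied to any listed library's API:

```python
import numpy as np

def csr_spmm(indptr, indices, data, dense):
    """Reference SpMM: multiply a CSR sparse matrix A by a dense
    matrix B, returning A @ B as a dense array."""
    rows = len(indptr) - 1
    out = np.zeros((rows, dense.shape[1]), dtype=dense.dtype)
    for r in range(rows):
        # nonzeros of row r live in data[indptr[r]:indptr[r+1]]
        for j in range(indptr[r], indptr[r + 1]):
            out[r] += data[j] * dense[indices[j]]
    return out

# A = [[1, 0], [0, 2]] in CSR form, B dense
A_indptr, A_indices, A_data = [0, 1, 2], [0, 1], np.array([1.0, 2.0])
B = np.array([[1.0, 2.0], [3.0, 4.0]])
print(csr_spmm(A_indptr, A_indices, A_data, B))
```

The GPU libraries listed differ mainly in how they tile this loop onto tensor cores and which sparsity structure (unstructured, N:M, quantized) they exploit.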