Qualcomm-AI-research / dynamic-sparsity
☆15 · Updated 9 months ago
Alternatives and similar repositories for dynamic-sparsity
Users interested in dynamic-sparsity are comparing it to the libraries listed below.
- ☆11 · Updated last year
- Codebase for the ICML'24 paper "Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs" ☆27 · Updated last year
- The official implementation of the DAC 2024 paper GQA-LUT ☆20 · Updated last year
- ☆40 · Updated last year
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆59 · Updated 9 months ago
- A curated list of resources on efficient large language models ☆11 · Updated last year
- ☆15 · Updated last year
- Sparsity support for PyTorch ☆38 · Updated 9 months ago
- LLM Inference with Microscaling Format ☆34 · Updated last year
- TileFusion is an experimental C++ macro kernel template library that raises the abstraction level of CUDA C for tile processing. ☆104 · Updated 6 months ago
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆141 · Updated 2 years ago
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆50 · Updated 5 months ago
- ☆39 · Updated 3 weeks ago
- Artifacts of the EVT paper (ASPLOS'24) ☆28 · Updated last year
- ☆31 · Updated last year
- Chameleon: Adaptive Code Optimization for Expedited Deep Neural Network Compilation ☆27 · Updated 6 years ago
- Autocomp: AI-Driven Code Optimizer for Tensor Accelerators ☆59 · Updated this week
- ☆15 · Updated 3 years ago
- ☆58 · Updated last year
- Optimize tensor programs fast with Felix, a gradient-descent autotuner. ☆29 · Updated last year
- ☆164 · Updated last year
- ☆85 · Updated 11 months ago
- Torch2Chip (MLSys 2024) ☆55 · Updated 9 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆92 · Updated 3 months ago
- GPTQ inference TVM kernel ☆41 · Updated last year
- Artifact evaluation for the HPCA'24 paper Lightening-Transformer: A Dynamically-operated Optically-interconnected Photonic Transformer Accelerator ☆11 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆123 · Updated 6 months ago
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference ☆55 · Updated last year
- ☆13 · Updated 2 years ago
- Implementation of the paper "AdaTune: Adaptive Tensor Program Compilation Made Efficient" (NeurIPS 2020). ☆14 · Updated 4 years ago