lutnn / blink-mm
☆15 · Updated 2 years ago
Alternatives and similar repositories for blink-mm
Users interested in blink-mm are comparing it to the libraries listed below.
- ☆150 · Updated last year
- ☆51 · Updated last year
- PIM-DL: Expanding the Applicability of Commodity DRAM-PIMs for Deep Learning via Algorithm-System Co-Optimization ☆33 · Updated last year
- LLM Inference analyzer for different hardware platforms ☆87 · Updated last month
- ☆68 · Updated last year
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆218 · Updated last year
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆57 · Updated 5 months ago
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) for deep learning on Tensor Cores. ☆89 · Updated 2 years ago
- ☆39 · Updated 2 years ago
- ☆38 · Updated last year
- PyTorch library for cost-effective, fast and easy serving of MoE models. ☆224 · Updated last month
- FlashSparse significantly reduces the computation redundancy for unstructured sparsity (for SpMM and SDDMM) on Tensor Cores through a Swa… ☆29 · Updated last month
- Explore Inter-layer Expert Affinity in MoE Model Inference ☆13 · Updated last year
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NeurIPS'24) ☆43 · Updated 8 months ago
- Microsoft Collective Communication Library ☆67 · Updated 9 months ago
- ☆181 · Updated last year
- ☆54 · Updated last year
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆110 · Updated 8 months ago
- ☆81 · Updated 2 years ago
- A lightweight design for computation-communication overlap. ☆160 · Updated last week
- ☆25 · Updated 2 years ago
- ☆50 · Updated 2 months ago
- A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores ☆53 · Updated last year (a 2:4 pruning sketch follows the list)
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆47 · Updated 3 weeks ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆135 · Updated last month
- InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24) ☆152 · Updated last year (a simplified KV-eviction sketch follows the list)
- Compiler for Dynamic Neural Networks ☆46 · Updated last year
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" ☆68 · Updated 2 months ago
- ☆19 · Updated 11 months ago
- GPU TopK Benchmark ☆15 · Updated 8 months ago
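
For readers new to the N:M entry above: 2:4 is the structured-sparsity pattern that NVIDIA Sparse Tensor Cores accelerate, keeping at most 2 nonzeros in every group of 4 weights. Below is a minimal magnitude-pruning sketch of that pattern; `prune_2_4` is an illustrative name, not an API from any repository listed here.

```python
import numpy as np

def prune_2_4(w: np.ndarray) -> np.ndarray:
    """Zero the 2 smallest-magnitude values in every group of 4 along
    the last axis, producing the 2:4 pattern Sparse Tensor Cores accept.
    Illustrative sketch only; real libraries also emit metadata indices."""
    groups = w.reshape(-1, 4)
    # indices of the 2 smallest |values| within each group of 4
    drop = np.argsort(np.abs(groups), axis=1)[:, :2]
    pruned = groups.copy()
    np.put_along_axis(pruned, drop, 0.0, axis=1)
    return pruned.reshape(w.shape)

w = np.random.randn(4, 8).astype(np.float32)  # last dim divisible by 4
w_24 = prune_2_4(w)
assert (w_24.reshape(-1, 4) != 0).sum(axis=1).max() <= 2
```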
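
Several entries above (ArkVale, InfiniGen) center on KV-cache eviction during LLM decoding: cached key/value pairs that receive little attention are dropped to fit a memory budget. The sketch below shows the bare idea with a score-based keep set; the `evict_kv` function and the accumulated-score input are hypothetical simplifications, not either repository's actual policy (ArkVale, for instance, can recall evicted entries).

```python
import numpy as np

def evict_kv(keys, values, acc_scores, budget):
    """Keep the `budget` cached positions with the highest accumulated
    attention mass and drop the rest. Hypothetical simplification of
    score-based KV eviction; real systems track pages and may recall."""
    keep = np.argsort(acc_scores)[-budget:]
    keep.sort()  # preserve positional order among the survivors
    return keys[keep], values[keep], keep

# toy cache: 16 cached tokens with head_dim = 8
keys = np.random.randn(16, 8).astype(np.float32)
values = np.random.randn(16, 8).astype(np.float32)
acc = np.random.rand(16)  # per-position attention mass (assumed given)
k2, v2, kept = evict_kv(keys, values, acc, budget=8)
assert k2.shape == (8, 8) and kept.shape == (8,)
```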