abdelfattah-lab / shadow_llm
☆11 · Updated last year
Alternatives and similar repositories for shadow_llm
Users interested in shadow_llm are comparing it to the libraries listed below.
- LLM Inference with Microscaling Format ☆34 · Updated last year
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆24 · Updated last year
- ☆15 · Updated 10 months ago
- The official implementation of the DAC 2024 paper GQA-LUT ☆20 · Updated last year
- ☆15 · Updated 3 years ago
- [ACL 2025] Squeezed Attention: Accelerating Long Prompt LLM Inference ☆56 · Updated last year
- Codebase for the ICML'24 paper: Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs ☆27 · Updated last year
- This repo contains the code for studying the interplay between quantization and sparsity methods ☆26 · Updated 11 months ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model… ☆68 · Updated last year
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆52 · Updated 5 months ago
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆22 · Updated 2 months ago
- SLiM: One-shot Quantized Sparse Plus Low-rank Approximation of LLMs (ICML 2025) ☆32 · Updated 2 months ago
- ☆31 · Updated last year
- AdaSkip: Adaptive Sublayer Skipping for Accelerating Long-Context LLM Inference ☆20 · Updated last year
- ☆58 · Updated last year
- ☆25 · Updated last year
- ☆84 · Updated last year
- ☆75 · Updated last month
- ☆40 · Updated last year
- Code release for AdapMoE, accepted at ICCAD 2024 ☆35 · Updated 9 months ago
- ☆21 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆123 · Updated 6 months ago
- Sirius, an efficient correction mechanism that significantly boosts Contextual Sparsity models on reasoning tasks while maintaining its… ☆21 · Updated last year
- Residual vector quantization for KV cache compression in large language models ☆11 · Updated last year
- Torch2Chip (MLSys 2024) ☆55 · Updated 9 months ago
- ☆60 · Updated last year
- ☆16 · Updated last year
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NeurIPS'24) ☆52 · Updated last year
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer L… ☆49 · Updated 3 years ago
- Official implementation of the EMNLP'23 paper "Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?" ☆24 · Updated 2 years ago