☆28 · Dec 2, 2024 · Updated last year
Alternatives and similar repositories for LLMCBench
Users interested in LLMCBench are comparing it to the libraries listed below.
Sorting:
- ☆11 · Jan 10, 2025 · Updated last year
- Provides new architectures, channel pruning, and quantization methods for YOLOv5 ☆30 · Oct 13, 2025 · Updated 5 months ago
- ☆20 · Apr 27, 2021 · Updated 4 years ago
- This repo contains the source code for: Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs ☆43 · Aug 14, 2024 · Updated last year
- A simple PyTorch implementation of Differentiable Architecture Search (DARTS) ☆22 · Aug 27, 2019 · Updated 6 years ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆62 · Mar 25, 2025 · Updated 11 months ago
- PyTorch code for our paper "Progressive Binarization with Semi-Structured Pruning for LLMs" ☆13 · Mar 11, 2026 · Updated last week
- Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models ☆49 · Nov 5, 2024 · Updated last year
- ☆26 · Dec 10, 2020 · Updated 5 years ago
- (DAC2019 / TCAD2020) Faster Region-based Hotspot Detection ☆24 · Aug 12, 2023 · Updated 2 years ago
- D^2-MoE: Delta Decompression for MoE-based LLMs Compression ☆73 · Mar 25, 2025 · Updated 11 months ago
- BESA is a differentiable weight pruning technique for large language models ☆17 · Mar 4, 2024 · Updated 2 years ago
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) ☆27 · Oct 3, 2023 · Updated 2 years ago
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective" ☆33 · May 9, 2024 · Updated last year
- Home page for the Microsoft Phi-Ground tech report ☆23 · Sep 8, 2025 · Updated 6 months ago
- NENU's recommendation letter template, made with LaTeX ☆13 · May 26, 2024 · Updated last year
- ☆40 · Nov 22, 2025 · Updated 4 months ago
- Code repository of Evaluating Quantized Large Language Models ☆135 · Sep 8, 2024 · Updated last year
- ☆15 · Mar 20, 2023 · Updated 3 years ago
- Transforming Video Diffusion with Temporal Sparse Attention ☆46 · Updated this week
- ☆11 · Sep 20, 2024 · Updated last year
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs, and video generative models ☆691 · Mar 11, 2026 · Updated last week
- [DATE'2025, TCAD'2025] Terafly: A Multi-Node FPGA Based Accelerator Design for Efficient Cooperative Inference in LLMs ☆30 · Nov 13, 2025 · Updated 4 months ago
- Semantic Segmentation of Pathological Images ☆11 · Oct 3, 2023 · Updated 2 years ago
- Adversarial Attack for Pre-trained Code Models ☆10 · Jul 19, 2022 · Updated 3 years ago
- (NeurIPS 2024) BiDM: Pushing the Limit of Quantization for Diffusion Models ☆22 · Nov 20, 2024 · Updated last year
- Official repository for the paper Local Linear Attention: An Optimal Interpolation of Linear and Softmax Attention For Test-Time Regressi… ☆23 · Oct 1, 2025 · Updated 5 months ago
- [CVPR 2024] PTQ4SAM: Post-Training Quantization for Segment Anything ☆83 · Jun 26, 2024 · Updated last year
- An implementation of LLMzip using GPT-2 ☆13 · Aug 7, 2023 · Updated 2 years ago
- A JPEG-LS plugin for the Python Pillow library ☆16 · Dec 31, 2023 · Updated 2 years ago
- Full FPGA implementation of a dual-channel voice FM receiver for Problem G of the 2019 National Undergraduate Electronic Design Contest ☆18 · Apr 15, 2020 · Updated 5 years ago
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity" ☆81 · Jul 7, 2025 · Updated 8 months ago
- Official repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆67 · Mar 27, 2025 · Updated 11 months ago
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆70 · Jan 6, 2024 · Updated 2 years ago
- Cross-Domain Deep Code Search with Few-Shot Learning ☆11 · Jul 5, 2023 · Updated 2 years ago
- CPrune: Compiler-Informed Model Pruning for Efficient Target-Aware DNN Execution ☆17 · Jun 25, 2023 · Updated 2 years ago
- Revisiting Parameter Sharing for Automatic Neural Channel Number Search, NeurIPS 2020 ☆22 · Nov 15, 2020 · Updated 5 years ago
- ☆17 · Mar 8, 2025 · Updated last year
- ☆26 · Nov 16, 2025 · Updated 4 months ago