☆28 · Updated Dec 2, 2024
Alternatives and similar repositories for LLMCBench
Users that are interested in LLMCBench are comparing it to the libraries listed below.
- A repository for Binary General Matrix Multiply (BGEMM) via customized CUDA kernels. Thanks to FP6-LLM for the groundwork! ☆20 · Updated Aug 30, 2024
- ☆11 · Updated Jan 10, 2025
- Provides new architectures, channel pruning, and quantization methods for YOLOv5. ☆30 · Updated Oct 13, 2025
- Codebase for "MELTing Point: Mobile Evaluation of Language Transformers". ☆19 · Updated Jul 19, 2024
- ☆20 · Updated Apr 27, 2021
- This repo contains the source code for "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs". ☆43 · Updated Aug 14, 2024
- High-performance FP8 GEMM kernels for SM89 and later GPUs. ☆21 · Updated Jan 24, 2025
- A simple PyTorch implementation of Differentiable Architecture Search (DARTS). ☆22 · Updated Aug 27, 2019
- PyTorch code for our paper "Progressive Binarization with Semi-Structured Pruning for LLMs". ☆13 · Updated Mar 11, 2026
- Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models. ☆49 · Updated Nov 5, 2024
- ☆26 · Updated Dec 10, 2020
- (DAC 2019 / TCAD 2020) Faster Region-based Hotspot Detection. ☆24 · Updated Aug 12, 2023
- D^2-MoE: Delta Decompression for MoE-based LLMs Compression. ☆82 · Updated Mar 25, 2025
- ☆23 · Updated Nov 26, 2024
- Official code for the paper "Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark". ☆30 · Updated Jun 30, 2025
- BESA is a differentiable weight pruning technique for large language models. ☆17 · Updated Mar 4, 2024
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20). ☆26 · Updated Oct 3, 2023
- Code for "Seeking Neural Nuggets: Knowledge Transfer in Large Language Models from a Parametric Perspective". ☆33 · Updated May 9, 2024
- Home page for the Microsoft Phi-Ground tech report. ☆23 · Updated Sep 8, 2025
- NENU's recommendation letter template, made with LaTeX. ☆13 · Updated May 26, 2024
- ☆77 · Updated Dec 16, 2025
- Code repository for "Evaluating Quantized Large Language Models". ☆134 · Updated Sep 8, 2024
- ☆15 · Updated Mar 20, 2023
- Transforming Video Diffusion with Temporal Sparse Attention. ☆48 · Updated Apr 8, 2026
- ☆11 · Updated Sep 20, 2024
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models, including LLMs, VLMs, and video generative models. ☆711 · Updated Apr 1, 2026
- Adversarial Attack for Pre-trained Code Models. ☆10 · Updated Jul 19, 2022
- Semantic Segmentation of Pathological Images. ☆11 · Updated Oct 3, 2023
- (NeurIPS 2024) BiDM: Pushing the Limit of Quantization for Diffusion Models. ☆22 · Updated Nov 20, 2024
- ☆11 · Updated Sep 27, 2018
- An implementation of LLMzip using GPT-2. ☆14 · Updated Aug 7, 2023
- [DATE 2025, TCAD 2025] Terafly: A Multi-Node FPGA-Based Accelerator Design for Efficient Cooperative Inference in LLMs. ☆36 · Updated Nov 13, 2025
- [CVPR 2024] PTQ4SAM: Post-Training Quantization for Segment Anything. ☆85 · Updated Jun 26, 2024
- Full FPGA implementation of a dual-channel voice FM receiver for Problem G of the 2019 China National Undergraduate Electronic Design Contest. ☆17 · Updated Apr 15, 2020
- Official PyTorch implementation of "Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity". ☆81 · Updated Jul 7, 2025
- Official repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024). ☆68 · Updated Mar 27, 2025
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models. ☆75 · Updated Jan 6, 2024
- CPrune: Compiler-Informed Model Pruning for Efficient Target-Aware DNN Execution. ☆17 · Updated Jun 25, 2023
- Revisiting Parameter Sharing for Automatic Neural Channel Number Search (NeurIPS 2020). ☆21 · Updated Nov 15, 2020