mit-han-lab / pruning-sparsity-publications
☆24 · Updated 2 years ago
Alternatives and similar repositories for pruning-sparsity-publications
Users interested in pruning-sparsity-publications are comparing it to the libraries listed below.
- ☆172 · Updated 2 years ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆113 · Updated 2 months ago
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning". ☆126 · Updated 2 years ago
- ☆100 · Updated last year
- A curated list of high-quality papers on resource-efficient LLMs 🌱 ☆134 · Updated 5 months ago
- [DAC 2024] EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Layerwise Unified Compression and Adaptive La… ☆65 · Updated last year
- Code repository of "Evaluating Quantized Large Language Models" ☆130 · Updated 11 months ago
- Awesome list for LLM quantization ☆282 · Updated last week
- A collection of research papers on low-precision training methods ☆33 · Updated 3 months ago
- [ACL 2024] A novel QAT with Self-Distillation framework to enhance ultra-low-bit LLMs. ☆122 · Updated last year
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆218 · Updated last year
- A repository dedicated to evaluating the performance of quantized LLaMA3 using various quantization methods. ☆193 · Updated 7 months ago
- ☆66 · Updated last month
- Survey Paper List - Efficient LLM and Foundation Models ☆255 · Updated 11 months ago
- This repo contains the code for studying the interplay between quantization and sparsity methods ☆22 · Updated 6 months ago
- ☆158 · Updated 2 years ago
- ☆78 · Updated 4 months ago
- A minimal implementation of vllm. ☆52 · Updated last year
- Lab 5 project of MIT-6.5940, deploying LLaMA2-7B-chat on one's laptop with TinyChatEngine. ☆18 · Updated last year
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆57 · Updated 5 months ago
- PyTorch implementation of our ICML 2024 paper "CaM: Cache Merging for Memory-efficient LLMs Inference" ☆42 · Updated last year
- This is a list of awesome edge-AI inference papers. ☆97 · Updated last year
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model… ☆64 · Updated last year
- ☆72 · Updated 10 months ago
- ☆206 · Updated 3 years ago
- A curated list of early exiting (LLM, CV, NLP, etc.) ☆58 · Updated last year
- ☆81 · Updated 7 months ago
- This is a collection of our research on efficient AI, covering hardware-aware NAS and model compression. ☆83 · Updated 10 months ago
- LLM Inference with Deep Learning Accelerator. ☆50 · Updated 7 months ago
- Efficient LLM Inference Acceleration using Prompting ☆50 · Updated 10 months ago