GATECH-EIC / Edge-LLM
[DAC 2024] EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Layerwise Unified Compression and Adaptive Layer Tuning and Voting
☆56 · Updated 11 months ago
Alternatives and similar repositories for Edge-LLM
Users interested in Edge-LLM are comparing it to the repositories listed below.
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆93 · Updated 9 months ago
- This repo contains the code for studying the interplay between quantization and sparsity methods ☆19 · Updated 3 months ago
- ☆98 · Updated last year
- Code release for AdapMoE, accepted by ICCAD 2024 ☆25 · Updated last month
- Code Repository of Evaluating Quantized Large Language Models ☆123 · Updated 8 months ago
- The official implementation of the DAC 2024 paper GQA-LUT ☆18 · Updated 5 months ago
- LLM Inference with Microscaling Format ☆23 · Updated 6 months ago
- ☆22 · Updated 6 months ago
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models ☆47 · Updated 2 years ago
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆14 · Updated 11 months ago
- List of papers related to Vision Transformer quantization and hardware acceleration in recent AI conferences and journals. ☆90 · Updated last year
- Official implementation for the ECCV 2022 paper LIMPQ - "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" ☆54 · Updated 2 years ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models" ☆62 · Updated last year
- DeiT implementation for Q-ViT ☆25 · Updated last month
- ViTALiTy (HPCA'23) Code Repository ☆22 · Updated 2 years ago
- [ICML 2025] SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models ☆31 · Updated 9 months ago
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆161 · Updated 8 months ago
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆61 · Updated 2 months ago
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆16 · Updated 5 months ago
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆107 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆107 · Updated last month
- ☆59 · Updated last year
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆32 · Updated last year
- ☆27 · Updated this week
- Lab 5 project of MIT-6.5940, deploying LLaMA2-7B-chat on one's laptop with TinyChatEngine ☆16 · Updated last year
- ☆57 · Updated last month
- Codebase for the ICML'24 paper: Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs ☆26 · Updated 11 months ago
- ArkVale: Efficient Generative LLM Inference with Recallable Key-Value Eviction (NeurIPS'24) ☆40 · Updated 5 months ago
- PyTorch implementation of our paper accepted by ICML 2024 -- CaM: Cache Merging for Memory-efficient LLMs Inference ☆39 · Updated 11 months ago
- ☆41 · Updated 5 months ago