GATECH-EIC / Edge-LLM
[DAC 2024] EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Layerwise Unified Compression and Adaptive Layer Tuning and Voting
☆72 · Updated last year
Alternatives and similar repositories for Edge-LLM
Users interested in Edge-LLM are comparing it to the repositories listed below.
- Code release for AdapMoE, accepted by ICCAD 2024 ☆34 · Updated 7 months ago
- Code for studying the interplay between quantization and sparsity methods ☆24 · Updated 9 months ago
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆116 · Updated last year
- ☆113 · Updated 2 years ago
- List of papers on Vision Transformer quantization and hardware acceleration from recent AI conferences and journals ☆97 · Updated last year
- Code repository of "Evaluating Quantized Large Language Models" ☆137 · Updated last year
- ☆33 · Updated this week
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆24 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆121 · Updated 5 months ago
- Curated collection of papers on MoE model inference ☆314 · Updated 2 months ago
- The official PyTorch implementation of the NeurIPS 2022 (Spotlight) paper "Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models" ☆49 · Updated 3 years ago
- Official implementation for the ECCV 2022 paper LIMPQ - "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" ☆61 · Updated 2 years ago
- Repo for SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting (ISCA'25) ☆68 · Updated 7 months ago
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆124 · Updated 2 years ago
- The official implementation of the DAC 2024 paper GQA-LUT ☆20 · Updated last year
- ☆212 · Updated last month
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models" ☆68 · Updated last year
- ☆26 · Updated last year
- LLM Inference with Microscaling Format ☆33 · Updated last year
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆59 · Updated 8 months ago
- An efficient spatial accelerator enabling hybrid sparse attention mechanisms for long sequences ☆30 · Updated last year
- ☆82 · Updated last year
- DeiT implementation for Q-ViT ☆25 · Updated 8 months ago
- ☆26 · Updated last year
- ☆51 · Updated last year
- ☆25 · Updated last year
- Codebase for the ICML'24 paper "Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs" ☆27 · Updated last year
- ViTALiTy (HPCA'23) Code Repository ☆23 · Updated 2 years ago
- ☆15 · Updated last year
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆30 · Updated 2 years ago