GATECH-EIC / Edge-LLM
[DAC 2024] EDGE-LLM: Enabling Efficient Large Language Model Adaptation on Edge Devices via Layerwise Unified Compression and Adaptive Layer Tuning and Voting
☆78 · Updated last year
Alternatives and similar repositories for Edge-LLM
Users interested in Edge-LLM are comparing it to the repositories listed below.
- Code release for AdapMoE accepted by ICCAD 2024 ☆35 · Updated 9 months ago
- This repo contains the code for studying the interplay between quantization and sparsity methods ☆26 · Updated 11 months ago
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆122 · Updated last year
- ☆113 · Updated 2 years ago
- Code Repository of Evaluating Quantized Large Language Models ☆136 · Updated last year
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆24 · Updated last year
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆126 · Updated 2 years ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models" ☆68 · Updated last year
- ☆75 · Updated last month
- ☆35 · Updated last month
- List of papers related to Vision Transformers quantization and hardware acceleration in recent AI conferences and journals. ☆101 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆123 · Updated 6 months ago
- SpInfer: Leveraging Low-Level Sparsity for Efficient Large Language Model Inference on GPUs ☆61 · Updated 10 months ago
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models ☆49 · Updated 3 years ago
- LLM Inference with Microscaling Format ☆34 · Updated last year
- ☆54 · Updated last year
- ☆27 · Updated last year
- The official implementation of the DAC 2024 paper GQA-LUT ☆20 · Updated last year
- DeiT implementation for Q-ViT ☆25 · Updated 9 months ago
- Repo for SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting (ISCA'25) ☆70 · Updated 9 months ago
- ☆16 · Updated last year
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆22 · Updated 2 months ago
- Curated collection of papers on MoE model inference ☆339 · Updated 3 months ago
- ☆25 · Updated last year
- Official implementation for the ECCV 2022 paper LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" ☆61 · Updated 2 years ago
- ☆84 · Updated last year
- A repository of Binary General Matrix Multiply (BGEMM) implemented with custom CUDA kernels. Thanks to FP6-LLM for the building blocks! ☆18 · Updated last year
- A curated list of high-quality papers on resource-efficient LLMs 🌱 ☆155 · Updated 10 months ago
- An open-source PyTorch library for developing energy-efficient, multiplication-less models and applications. ☆14 · Updated 11 months ago
- Torch2Chip (MLSys 2024) ☆55 · Updated 10 months ago