falcon-xu / early-exit-papers
A curated list of early-exiting papers (LLM, CV, NLP, etc.)
☆44 · Updated 7 months ago
Alternatives and similar repositories for early-exit-papers:
Users interested in early-exit-papers are comparing it to the repositories listed below.
- ☆50 · Updated last year
- ☆99 · Updated last year
- Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples [NeurIPS 2021] ☆31 · Updated 3 years ago
- ☆21 · Updated 3 months ago
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning". ☆113 · Updated last year
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models ☆44 · Updated last year
- [NeurIPS 2021] Sparse Training via Boosting Pruning Plasticity with Neuroregeneration ☆32 · Updated 2 years ago
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer L… ☆48 · Updated 2 years ago
- A curated list of Early Exiting papers, benchmarks, and misc. ☆108 · Updated last year
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model… ☆58 · Updated last year
- ☆36 · Updated 9 months ago
- Create tiny ML systems for on-device learning. ☆20 · Updated 3 years ago
- ☆19 · Updated last year
- ☆47 · Updated 3 months ago
- Awesome LLM pruning papers: an all-in-one repository integrating useful resources and insights. ☆76 · Updated 3 months ago
- Official PyTorch implementation of the ICLR 2024 paper, Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆45 · Updated 11 months ago
- ☆43 · Updated last year
- A Cluster-Wide Model Manager to Accelerate DNN Training via Automated Training Warmup ☆34 · Updated 2 years ago
- Measuring and predicting on-device metrics (latency, power, etc.) of machine learning models ☆66 · Updated last year
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆91 · Updated 8 months ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆98 · Updated 2 months ago
- ☆42 · Updated 2 years ago
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆46 · Updated last year
- The official implementation of TinyTrain [ICML '24] ☆21 · Updated 8 months ago
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆15 · Updated 3 months ago
- Official implementation of LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance", ECCV 2022 ☆53 · Updated 2 years ago
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024) ☆19 · Updated last year
- [NeurIPS 2022] A Fast Post-Training Pruning Framework for Transformers ☆180 · Updated 2 years ago
- A list of awesome edge-AI inference papers. ☆95 · Updated last year