hongsunjang / pipe-bd
[DATE 2023] Pipe-BD: Pipelined Parallel Blockwise Distillation
☆11 · Updated 2 years ago
Alternatives and similar repositories for pipe-bd
Users interested in pipe-bd are comparing it to the repositories listed below.
- [HPCA'24] Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System ☆49 · Updated 2 months ago
- It's All In the Teacher: Zero-Shot Quantization Brought Closer to the Teacher [CVPR 2022 Oral] ☆29 · Updated 3 years ago
- Codebase for ICML'24 paper: Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs ☆27 · Updated last year
- Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples [NeurIPS 2021] ☆33 · Updated 3 years ago
- ☆21 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆116 · Updated 3 months ago
- The official implementation of the DAC 2024 paper GQA-LUT ☆20 · Updated 9 months ago
- ☆55 · Updated last year
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model…" ☆66 · Updated last year
- Experimental deep learning framework written in Rust ☆15 · Updated 2 years ago
- ☆45 · Updated last year
- ☆56 · Updated last year
- ☆73 · Updated 4 months ago
- Pytorch implementation of our paper accepted by ICML 2024 -- CaM: Cache Merging for Memory-efficient LLMs Inference ☆47 · Updated last year
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆20 · Updated last year
- ☆15 · Updated last year
- ☆27 · Updated 10 months ago
- ☆15 · Updated 2 years ago
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆30 · Updated last year
- LLM Inference with Microscaling Format ☆31 · Updated 11 months ago
- ☆76 · Updated last year
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization ☆38 · Updated last year
- ☆11 · Updated 5 months ago
- Residual vector quantization for KV cache compression in large language models ☆10 · Updated 11 months ago
- ☆25 · Updated 2 years ago
- ☆10 · Updated last year
- ☆61 · Updated 2 years ago
- ☆22 · Updated last week
- Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs ☆21 · Updated 10 months ago
- This repo contains the code for studying the interplay between quantization and sparsity methods ☆23 · Updated 7 months ago