horseee / LLaMA-Pruning
Structural Pruning for LLaMA
☆54 · Updated 2 years ago
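As a point of reference for what "structural pruning" means here, below is a minimal, self-contained PyTorch sketch that removes whole output channels from a single linear layer by L2 weight norm. It is illustrative only, not the LLaMA-Pruning repository's actual code; the layer shape and the 25% pruning ratio are arbitrary assumptions for the example.

```python
# Illustrative sketch of structural (channel-level) pruning on one linear layer.
# NOT the LLaMA-Pruning repository's implementation; shapes and ratio are assumed.
import torch
import torch.nn as nn

def prune_linear_out_features(layer: nn.Linear, ratio: float) -> nn.Linear:
    """Return a smaller Linear keeping the output channels with the largest L2 weight norms."""
    n_keep = int(layer.out_features * (1.0 - ratio))
    # Importance score per output channel: L2 norm of its weight row.
    scores = layer.weight.detach().norm(p=2, dim=1)
    keep_idx = torch.argsort(scores, descending=True)[:n_keep].sort().values

    pruned = nn.Linear(layer.in_features, n_keep, bias=layer.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(layer.weight[keep_idx])
        if layer.bias is not None:
            pruned.bias.copy_(layer.bias[keep_idx])
    return pruned

layer = nn.Linear(4096, 11008)                     # roughly a LLaMA-7B MLP up-projection shape
smaller = prune_linear_out_features(layer, 0.25)   # drop 25% of output channels
print(smaller)                                     # Linear(in_features=4096, out_features=8256, ...)
```

Unlike unstructured (per-weight) pruning, this actually shrinks the layer, so in a full model the downstream layer that consumes these outputs must drop the matching input channels as well; handling those dependencies across attention and MLP blocks is what structural-pruning tools automate.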
Alternatives and similar repositories for LLaMA-Pruning
Users interested in LLaMA-Pruning are comparing it to the libraries listed below.
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆99 · Updated last year
- PB-LLM: Partially Binarized Large Language Models ☆156 · Updated last year
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆110 · Updated last year
- SparseGPT + GPTQ Compression of LLMs like LLaMa, OPT, Pythia ☆40 · Updated 2 years ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆124 · Updated 9 months ago
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆94 · Updated 11 months ago
- QuIP quantization ☆59 · Updated last year
- ☆127 · Updated last year
- Reorder-based post-training quantization for large language model ☆194 · Updated 2 years ago
- Unofficial implementations of block/layer-wise pruning methods for LLMs. ☆72 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆147 · Updated last year
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆44 · Updated last year
- ☆202 · Updated 10 months ago
- ☆85 · Updated 9 months ago
- Is gradient information useful for pruning LLMs? ☆47 · Updated 2 months ago
- Repository for CPU Kernel Generation for LLM Inference ☆26 · Updated 2 years ago
- Low-bit optimizers for PyTorch ☆132 · Updated 2 years ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆104 · Updated last year
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory efficient Transformers. ☆48 · Updated 2 years ago
- Repository for Sparse Finetuning of LLMs via modified version of the MosaicML llmfoundry ☆42 · Updated last year
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆28 · Updated last year
- ☆156 · Updated 2 years ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆80 · Updated last year
- ☆69 · Updated last year
- Token Omission Via Attention ☆127 · Updated last year
- ☆52 · Updated 11 months ago
- [ICML2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely ☆22 · Updated last year
- GPTQLoRA: Efficient Finetuning of Quantized LLMs with GPTQ ☆102 · Updated 2 years ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆51 · Updated 6 months ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆174 · Updated last year