xichen-fy / Fira
Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint?
☆81 · Updated 3 weeks ago
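For context, the question in Fira's title refers to training where optimizer state is confined to a low-rank subspace of the gradients. A minimal sketch of that constraint (GaLore-style projection, not Fira's actual algorithm; the SVD-based basis, `rank`, and plain SGD update are illustrative assumptions):

```python
import torch

def low_rank_step(weight, grad, rank=8, lr=1e-3):
    # Project the full gradient onto a rank-r basis (top-r left singular
    # vectors), so optimizer state only needs the (r x n) projected gradient.
    U, _, _ = torch.linalg.svd(grad, full_matrices=False)
    P = U[:, :rank]                  # (m, r) projection basis
    g_low = P.T @ grad               # (r, n) low-rank gradient
    # A real optimizer (e.g. Adam) would keep moments of g_low here; plain
    # SGD is shown for brevity. The update projected back is rank-r, which
    # is the gap "full-rank training under low-rank constraint" targets.
    weight -= lr * (P @ g_low)
    return weight
```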
Related projects
Alternatives and complementary repositories for Fira
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆59 · Updated 6 months ago
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction". ☆36 · Updated 3 weeks ago
- Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆71 · Updated 5 months ago
- PyTorch implementation of our ICML 2024 paper "CaM: Cache Merging for Memory-efficient LLMs Inference". ☆25 · Updated 4 months ago
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆28 · Updated 5 months ago
- The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression". ☆94 · Updated last month
- Activation-aware Singular Value Decomposition for Compressing Large Language Models (see the ASVD sketch after this list) ☆49 · Updated 3 weeks ago
- Official Repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆36 · Updated 3 weeks ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆90 · Updated 2 months ago
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆101 · Updated last month
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆64 · Updated 5 months ago
- [ICLR 2024] Jaiswal, A., Gan, Z., Du, X., Zhang, B., Wang, Z., & Yang, Y. Compressing LLMs: The Truth is Rarely Pure and Never Simple. ☆17 · Updated 8 months ago
- This repo contains the source code for "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs". ☆32 · Updated 2 months ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆34 · Updated 8 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" (see the DoRA sketch after this list) ☆123 · Updated 6 months ago
- [ACL 2024] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models ☆66 · Updated 5 months ago
- An algorithm for static activation quantization of LLMs ☆68 · Updated this week
- Official implementation of ICML 2024 paper "ExCP: Extreme LLM Checkpoint Compression via Weight-Momentum Joint Shrinking". ☆41 · Updated 4 months ago
- The official implementation of Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference (a generic eviction sketch follows this list) ☆35 · Updated this week
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆40 · Updated last month
- ☆74 · Updated 4 months ago
- ☆31 · Updated 2 months ago
- EE-LLM is a framework for large-scale training and inference of early-exit (EE) large language models (LLMs). ☆47 · Updated 5 months ago
- Quantized Side Tuning: Fast and Memory-Efficient Tuning of Quantized Large Language Models ☆35 · Updated last week
- ☆46 · Updated last year
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆123 · Updated this week
- Code for https://arxiv.org/abs/2401.17139 (NeurIPS 2024) ☆23 · Updated this week
- Awesome-LLM-KV-Cache: A curated list of 📙 Awesome LLM KV Cache Papers with Codes. ☆96 · Updated this week
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆146 · Updated 4 months ago
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ☆33 · Updated 3 weeks ago
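For the activation-aware SVD entry, a hedged sketch of the core idea: scale the weight by per-channel activation statistics before a truncated SVD so the rank budget favors channels that are actually active at inference; the scaling choice and the epsilon clamp are assumptions, not the repo's exact recipe.

```python
import torch

def asvd_compress(W, act_scale, rank):
    # W: (out, in); act_scale: (in,), e.g. mean |activation| per input channel.
    S = act_scale.clamp(min=1e-6)                                # avoid divide-by-zero
    U, sig, Vh = torch.linalg.svd(W * S, full_matrices=False)    # SVD of W @ diag(S)
    L = U[:, :rank] * sig[:rank]                                 # (out, r) factor
    R = Vh[:rank] / S                                            # (r, in), folds diag(S)^-1 back in
    return L, R                                                  # W ≈ L @ R: one linear layer becomes two
```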
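For the DoRA entry, a hedged sketch of weight-decomposed low-rank adaptation as the paper describes it: the frozen weight is split into a per-column magnitude `m` and a direction that receives a LoRA-style `BA` update and is renormalized column-wise. The module name, rank, and init scale are illustrative, not the official code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.register_buffer("w0", base.weight.detach())       # frozen (out, in)
        self.bias = base.bias
        out_f, in_f = self.w0.shape
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.02)  # LoRA down-projection
        self.B = nn.Parameter(torch.zeros(out_f, rank))        # LoRA up, zero init
        # Magnitude starts at the column norms of the pretrained weight,
        # so the module matches the base layer before any training.
        self.m = nn.Parameter(self.w0.norm(p=2, dim=0, keepdim=True))

    def forward(self, x):
        v = self.w0 + self.B @ self.A                 # adapted direction
        v = v / v.norm(p=2, dim=0, keepdim=True)      # column-wise normalize
        return F.linear(x, self.m * v, self.bias)     # rescale by learned magnitude
```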
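Several entries above (SimLayerKV, Ada-KV, GEAR, and the two KV-cache paper lists) revolve around shrinking the KV cache. A generic score-based eviction sketch, in the spirit of accumulated-attention "heavy hitter" methods rather than any single repo's algorithm; the budget policy and the keep-newest rule are assumptions.

```python
import torch

def evict_kv(keys, values, attn, budget):
    # keys/values: (seq, d); attn: (num_queries, seq) attention weights.
    # Keep the `budget` past positions with the highest accumulated
    # attention mass; always protect the most recent position.
    scores = attn.sum(dim=0)                    # accumulated attention per key
    scores[-1] = float("inf")                   # never evict the newest token
    k = min(budget, scores.numel())
    keep = torch.topk(scores, k).indices.sort().values
    return keys[keep], values[keep]
```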