mobiusml / low-rank-llama2
Low-Rank Llama Custom Training
☆22 · Updated last year
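The repo's core idea, low-rank training, replaces a Llama model's dense weight matrices with products of two much smaller matrices. Below is a minimal PyTorch sketch of that factorization via truncated SVD; the helper name and SVD-based initialization are illustrative assumptions, not low-rank-llama2's actual API.

```python
import torch

def low_rank_factorize(linear: torch.nn.Linear, rank: int) -> torch.nn.Sequential:
    """Approximate a dense Linear layer by a rank-`rank` product of two layers.

    Hypothetical helper for illustration: W (out x in) is replaced by A @ B
    with A (out x rank) and B (rank x in), cutting the parameter count from
    out*in down to rank*(out + in).
    """
    W = linear.weight.data  # shape: (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # (out_features, rank), singular values folded in
    B = Vh[:rank, :]             # (rank, in_features)

    down = torch.nn.Linear(W.shape[1], rank, bias=False)
    up = torch.nn.Linear(rank, W.shape[0], bias=linear.bias is not None)
    down.weight.data.copy_(B)
    up.weight.data.copy_(A)
    if linear.bias is not None:
        up.bias.data.copy_(linear.bias.data)
    # Forward pass computes x @ B.T then @ A.T, approximating x @ W.T.
    return torch.nn.Sequential(down, up)

# Example (hypothetical): factorize one attention projection of a loaded model.
# layer.self_attn.q_proj = low_rank_factorize(layer.self_attn.q_proj, rank=64)
```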
Alternatives and similar repositories for low-rank-llama2:
Users interested in low-rank-llama2 are comparing it to the libraries listed below.
- ☆24 · Updated 4 months ago
- ☆29 · Updated last year
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆39 · Updated last year
- ☆24 · Updated 8 months ago
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆80 · Updated 4 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆68 · Updated 9 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity ☆71 · Updated 6 months ago
- Implementation of the paper CMoE: Fast Carving of Mixture-of-Experts for Efficient LLM Inference ☆17 · Updated 3 weeks ago
- ACL 2023 ☆39 · Updated last year
- 16-fold memory access reduction with nearly no loss ☆86 · Updated last week
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆30 · Updated 9 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆59 · Updated 5 months ago
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation ☆47 · Updated 8 months ago
- ☆122 · Updated last month
- Repository for CPU Kernel Generation for LLM Inference ☆25 · Updated last year
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024) ☆22 · Updated 9 months ago
- [ICML 2024 Oral] Official implementation of Accurate LoRA-Finetuning Quantization of LLMs via Information Retention ☆63 · Updated 11 months ago
- Squeezed Attention: Accelerating Long Prompt LLM Inference ☆45 · Updated 4 months ago
- Official repo for SparseLLM: Global Pruning of LLMs (NeurIPS 2024) ☆52 · Updated this week
- AFPQ code implementation ☆20 · Updated last year
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆105 · Updated 5 months ago
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers ☆47 · Updated last year
- Official code for Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM ☆14 · Updated last year
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models ☆48 · Updated 2 years ago
- [ICLR 2025] TidalDecode: Fast and Accurate LLM Decoding with Position Persistent Sparse Attention ☆31 · Updated this week
- ☆36 · Updated 7 months ago
- ☆11 · Updated 7 months ago
- [EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization ☆33 · Updated 6 months ago
- ☆50 · Updated last year