mobiusml / low-rank-llama2
Low-Rank Llama Custom Training
☆23 · Updated last year
Alternatives and similar repositories for low-rank-llama2
Users interested in low-rank-llama2 are comparing it to the repositories listed below.
- ☆29 · Updated last year
- Odysseus: Playground of LLM Sequence Parallelism · ☆75 · Updated last year
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs · ☆91 · Updated 8 months ago
- ☆137 · Updated 5 months ago
- ☆123 · Updated 2 months ago
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry · ☆42 · Updated last year
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts · ☆40 · Updated last year
- ☆33 · Updated last year
- ☆27 · Updated 9 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… · ☆67 · Updated last year
- Low-bit optimizers for PyTorch · ☆130 · Updated last year
- ☆154 · Updated 2 years ago
- ☆51 · Updated 2 months ago
- This repository contains code for the MicroAdam paper. · ☆19 · Updated 7 months ago
- SQUEEZED ATTENTION: Accelerating Long Prompt LLM Inference · ☆50 · Updated 8 months ago
- ☆59 · Updated last year
- [ICML 2025] SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity · ☆49 · Updated last month
- An efficient implementation of the NSA (Native Sparse Attention) kernel · ☆110 · Updated last month
- [COLM 2025] Official PyTorch implementation of "Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models" · ☆43 · Updated last month
- Boosting 4-bit inference kernels with 2:4 Sparsity · ☆80 · Updated 11 months ago
- Kinetics: Rethinking Test-Time Scaling Laws · ☆70 · Updated last month
- Activation-aware Singular Value Decomposition for Compressing Large Language Models · ☆74 · Updated 9 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM · ☆165 · Updated last year
- PB-LLM: Partially Binarized Large Language Models · ☆153 · Updated last year
- Are gradient information useful for pruning of LLMs? · ☆46 · Updated last year
- Flash-Muon: An Efficient Implementation of Muon Optimizer · ☆152 · Updated last month
- ☆21 · Updated 4 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code · ☆43 · Updated last month
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training · ☆222 · Updated this week
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" · ☆119 · Updated last year