dropbox / low-rank-llama2
Low-Rank Llama Custom Training
☆23 · Updated last year
Alternatives and similar repositories for low-rank-llama2
Users interested in low-rank-llama2 are comparing it to the repositories listed below.
- ☆30 · Updated last year
- Odysseus: Playground of LLM Sequence Parallelism ☆79 · Updated last year
- ☆31 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated 2 years ago
- Low-bit optimizers for PyTorch ☆137 · Updated 2 years ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆84 · Updated last year
- This repository contains code for the MicroAdam paper. ☆22 · Updated last year
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆93 · Updated last year
- ☆63 · Updated 7 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆128 · Updated 7 months ago
- ☆158 · Updated 11 months ago
- ☆133 · Updated 7 months ago
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆98 · Updated last year
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- ☆157 · Updated 2 years ago
- ACL 2023 ☆39 · Updated 2 years ago
- ☆63 · Updated 2 years ago
- Muon fsdp 2 ☆51 · Updated 5 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retention ☆67 · Updated last year
- Official Code For Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM ☆14 · Updated 2 years ago
- The evaluation framework for training-free sparse attention in LLMs ☆110 · Updated 3 months ago
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory efficient Transformers. ☆49 · Updated 2 years ago
- ☆40 · Updated last year
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year
- [COLM 2025] Official PyTorch implementation of "Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models" ☆67 · Updated 6 months ago
- ☆27 · Updated 9 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆176 · Updated last year
- ☆150 · Updated 2 years ago
- Transformers components but in Triton ☆34 · Updated 8 months ago
- The official implementation of the paper: SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction. ☆52 · Updated last year