mobiusml / low-rank-llama2
Low-Rank Llama Custom Training
☆23 · Updated last year
Alternatives and similar repositories for low-rank-llama2
Users interested in low-rank-llama2 are comparing it to the libraries listed below.
- Odysseus: Playground of LLM Sequence Parallelism ☆76 · Updated last year
- ☆140 · Updated 6 months ago
- ☆123 · Updated 3 months ago
- Low-bit optimizers for PyTorch ☆130 · Updated last year
- Repository for Sparse Finetuning of LLMs via modified version of the MosaicML llmfoundry ☆42 · Updated last year
- Kinetics: Rethinking Test-Time Scaling Laws ☆79 · Updated last month
- ☆32 · Updated last year
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆114 · Updated 2 months ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- Fast and memory-efficient exact attention ☆70 · Updated 5 months ago
- ☆52 · Updated 2 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆76 · Updated 10 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆120 · Updated last year
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory efficient Transformers. ☆49 · Updated 2 years ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆174 · Updated 2 months ago
- ☆29 · Updated last year
- [ICML 2025] SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity ☆51 · Updated last month
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆80 · Updated 11 months ago
- ☆29 · Updated 9 months ago
- Transformers components but in Triton ☆34 · Updated 3 months ago
- ☆154 · Updated 2 years ago
- This repository contains code for the MicroAdam paper. ☆19 · Updated 8 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆165 · Updated last year
- ☆22 · Updated 5 months ago
- PB-LLM: Partially Binarized Large Language Models ☆153 · Updated last year
- Linear Attention Sequence Parallelism (LASP) ☆86 · Updated last year
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆92 · Updated 9 months ago
- [COLM 2025] Official PyTorch implementation of "Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models" ☆47 · Updated last month
- Muon fsdp 2 ☆42 · Updated 3 weeks ago