dropbox / low-rank-llama2
Low-Rank Llama Custom Training
☆23 · Updated last year
Alternatives and similar repositories for low-rank-llama2
Users interested in low-rank-llama2 are comparing it to the libraries listed below.
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- ☆28 · Updated 11 months ago
- ☆130 · Updated 4 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆78 · Updated last year
- [ICML24] Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs ☆94 · Updated 11 months ago
- ☆156 · Updated 2 years ago
- ☆145 · Updated 8 months ago
- ☆30 · Updated last year
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆83 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆169 · Updated last year
- Repository for CPU Kernel Generation for LLM Inference ☆26 · Updated 2 years ago
- ☆36 · Updated last year
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- Here we will test various linear attention designs. ☆61 · Updated last year
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆46 · Updated 3 months ago
- ACL 2023 ☆39 · Updated 2 years ago
- Low-bit optimizers for PyTorch ☆132 · Updated 2 years ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year
- PB-LLM: Partially Binarized Large Language Models ☆156 · Updated last year
- ☆61 · Updated 2 years ago
- ☆112 · Updated last year
- ☆25 · Updated 6 months ago
- This repository contains code for the MicroAdam paper. ☆19 · Updated 10 months ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆195 · Updated 4 months ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆104 · Updated last year
- Xmixers: A collection of SOTA efficient token/channel mixers ☆29 · Updated last month
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆30 · Updated last year
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆119 · Updated 4 months ago
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆80 · Updated last year
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory efficient Transformers. ☆48 · Updated 2 years ago