SprocketLab / sparse_matrix_fine_tuning
Official repository for the ICML 2024 paper "MoRe Fine-Tuning with 10x Fewer Parameters"
☆20 · Updated 2 months ago
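MoRe's parameter savings come from replacing LoRA's dense low-rank update with a structured sparse (Monarch-style) product of block-diagonal factors. As a rough illustration of that idea only, not the repository's actual implementation, here is a minimal PyTorch sketch; the class name, square-weight restriction, block-transpose permutation, and initialization are all assumptions:

```python
import torch
import torch.nn as nn

class MonarchStyleAdapter(nn.Module):
    """Illustrative sketch: the pretrained weight stays frozen, and the
    trainable update is a product of two block-diagonal factors separated
    by a permutation, instead of LoRA's dense low-rank pair. Hypothetical
    names and shapes, not the repo's API."""

    def __init__(self, base: nn.Linear, nblocks: int = 4):
        super().__init__()
        assert base.in_features == base.out_features, "square weights, for simplicity"
        assert base.in_features % nblocks == 0
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                  # freeze pretrained weights
        b, k = nblocks, base.in_features // nblocks
        self.nblocks = b
        # Two block-diagonal factors: d*(k + b) trainable values vs d*d dense.
        self.L = nn.Parameter(torch.randn(b, k, k) * 0.02)
        self.R = nn.Parameter(torch.zeros(k, b, b))  # zero init: delta starts at 0

    def forward(self, x):
        y = self.base(x)                                  # frozen pretrained path
        b = self.nblocks
        k = x.shape[-1] // b
        h = x.reshape(*x.shape[:-1], b, k)                # split features into b blocks
        h = torch.einsum('...bk,bjk->...bj', h, self.L)   # block-diagonal factor 1
        h = h.transpose(-2, -1)                           # permutation: mix across blocks
        h = torch.einsum('...kb,kcb->...kc', h, self.R)   # block-diagonal factor 2
        return y + h.reshape(x.shape)                     # add the structured update

# quick smoke test: ~37k trainable values instead of 512*512 = 262k
layer = MonarchStyleAdapter(nn.Linear(512, 512), nblocks=8)
print(layer(torch.randn(2, 512)).shape)                   # torch.Size([2, 512])
```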
Alternatives and similar repositories for sparse_matrix_fine_tuning
Users interested in sparse_matrix_fine_tuning are comparing it to the libraries listed below. Several of the entries center on linear attention; a minimal sketch of that technique follows the list.
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆32 · Updated last year
- Linear Attention Sequence Parallelism (LASP) ☆85 · Updated last year
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆102 · Updated last year
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation ☆50 · Updated last year
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆41 · Updated last year
- Code for "RSQ: Learning from Important Tokens Leads to Better Quantized LLMs" ☆18 · Updated last month
- ☆19 · Updated 7 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆86 · Updated last month
- A WebUI for Side-by-Side Comparison of Media (Images/Videos) Across Multiple Folders ☆23 · Updated 5 months ago
- Here we will test various linear attention designs. ☆62 · Updated last year
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers. ☆48 · Updated 2 years ago
- A repository for research on medium-sized language models. ☆78 · Updated last year
- ☆51 · Updated last month
- DPO, but faster 🚀 ☆43 · Updated 7 months ago
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆28 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best… ☆51 · Updated 4 months ago
- ☆82 · Updated 6 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 10 months ago
- ☆52 · Updated last year
- Official code for the paper "Attention as a Hypernetwork" ☆40 · Updated last year
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity". ☆23 · Updated 8 months ago
- Triton implementation of bi-directional (non-causal) linear attention ☆52 · Updated 6 months ago
- ☆32 · Updated last year
- ☆34 · Updated 4 months ago
- TerDiT: Ternary Diffusion Models with Transformers ☆71 · Updated last year
- [EMNLP 2022] Official implementation of Transnormer from our EMNLP 2022 paper "The Devil in Linear Transformer" ☆61 · Updated 2 years ago
- Code for the paper "Patch-Level Training for Large Language Models" ☆86 · Updated 8 months ago
- My implementation of "Q-Sparse: All Large Language Models can be Fully Sparsely-Activated" ☆33 · Updated 11 months ago
- imagetokenizer is a Python package that helps you encode visuals and generate visual token ids from a codebook; supports both image and video… ☆35 · Updated last year
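Several repos above (the linearized-LLM decoding paper, LASP, DiJiang, the linear-attention design testbed, and the bi-directional Triton kernel) build on linear attention, which swaps the softmax for a kernel feature map so the key-value product can be computed first and attention costs O(n·d²) rather than O(n²·d). A minimal non-causal sketch, assuming the elu(x) + 1 feature map of Katharopoulos et al. (2020):

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Non-causal linear attention: softmax(QK^T)V is replaced by
    phi(Q)(phi(K)^T V), evaluated right-to-left so the cost is
    O(n*d^2) instead of O(n^2*d). phi(x) = elu(x) + 1 keeps the
    features positive so the normalizer is well defined."""
    q = F.elu(q) + 1.0                           # feature map phi(Q)
    k = F.elu(k) + 1.0                           # feature map phi(K)
    kv = torch.einsum('bnd,bne->bde', k, v)      # sum_n phi(k_n) v_n^T
    z = 1.0 / (torch.einsum('bnd,bd->bn', q, k.sum(dim=1)) + eps)  # row normalizer
    return torch.einsum('bnd,bde,bn->bne', q, kv, z)

# shape check: batch 2, sequence 128, head dim 64
q = k = v = torch.randn(2, 128, 64)
print(linear_attention(q, k, v).shape)           # torch.Size([2, 128, 64])
```

Causal variants, as in the autoregressive-decoding paper above, replace the global `kv` sum with a running prefix sum over positions, which is what makes linear attention behave like an RNN at inference time.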