SprocketLab / sparse_matrix_fine_tuning
Official repository for ICML 2024 paper "MoRe Fine-Tuning with 10x Fewer Parameters"
☆20 · Updated last month
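The paper parametrizes adapter updates with Monarch-style structured sparse matrices instead of dense low-rank (LoRA) factors, which is where the "10x fewer parameters" comes from. Below is a minimal, illustrative PyTorch sketch of that idea; the class and argument names (`MonarchAdapter`, `nblocks`) are assumptions for illustration, not the repository's actual API.

```python
import math
import torch
import torch.nn as nn

class MonarchAdapter(nn.Module):
    """Frozen base layer plus a trainable structured update:
    y = W x + B P (A x), where A and B are block-diagonal and P is a
    fixed stride permutation. The adapter stores O(d^2 / nblocks)
    parameters rather than the O(d^2) of a dense update.
    Illustrative sketch only, not the repo's real interface."""

    def __init__(self, base: nn.Linear, nblocks: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # fine-tune only the adapter
        d_in, d_out = base.in_features, base.out_features
        assert d_in % nblocks == 0 and d_out % nblocks == 0
        self.nblocks = nblocks
        # Block-diagonal factors stored as stacks of small dense blocks.
        self.A = nn.Parameter(                # (nblocks, d_in/b, d_in/b)
            torch.randn(nblocks, d_in // nblocks, d_in // nblocks) / math.sqrt(d_in))
        self.B = nn.Parameter(                # (nblocks, d_out/b, d_in/b)
            torch.zeros(nblocks, d_out // nblocks, d_in // nblocks))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b = self.nblocks
        # First block-diagonal multiply: chunk features, one block per chunk.
        h = x.reshape(*x.shape[:-1], b, -1)               # (..., b, d_in/b)
        h = torch.einsum('...bi,boi->...bo', h, self.A)   # (..., b, d_in/b)
        # Fixed stride permutation mixing information across blocks.
        h = h.transpose(-2, -1).reshape(*x.shape[:-1], -1)
        # Second block-diagonal multiply, mapping to the output width.
        h = h.reshape(*x.shape[:-1], b, -1)               # (..., b, d_in/b)
        h = torch.einsum('...bi,boi->...bo', h, self.B)   # (..., b, d_out/b)
        delta = h.reshape(*x.shape[:-1], -1)              # (..., d_out)
        return self.base(x) + delta
```

Because `B` is initialized to zero, the adapter is an exact no-op at initialization (the same trick LoRA uses), so fine-tuning starts from the pretrained model's behavior.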
Alternatives and similar repositories for sparse_matrix_fine_tuning
Users interested in sparse_matrix_fine_tuning are comparing it to the libraries listed below.
- Open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" ☆23 · Updated 8 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆42 · Updated last year
- Official implementation of ECCV24 paper: POA ☆24 · Updated 11 months ago
- A WebUI for Side-by-Side Comparison of Media (Images/Videos) Across Multiple Folders ☆24 · Updated 4 months ago
- imagetokenizer is a Python package that helps you encode visuals and generate visual token IDs from a codebook; supports both image and video… ☆34 · Updated last year
- A repository for research on medium-sized language models. ☆77 · Updated last year
- Linear Attention Sequence Parallelism (LASP) ☆85 · Updated last year
- Triton implementation of bi-directional (non-causal) linear attention (a generic sketch of the idea appears after this list) ☆51 · Updated 5 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆31 · Updated last year
- DPO, but faster 🚀 ☆43 · Updated 7 months ago
- Here we will test various linear attention designs. ☆60 · Updated last year
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu,… ☆47 · Updated 2 months ago
- ☆18 · Updated 6 months ago
- Official code for the paper "Attention as a Hypernetwork" ☆40 · Updated last year
- [NeurIPS 2024] Low-rank memory-efficient optimizer without SVD ☆30 · Updated last week
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation ☆49 · Updated last year
- Official implementation of APB (ACL 2025 main, Oral) ☆29 · Updated 4 months ago
- RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable), so it combines the best… ☆49 · Updated 3 months ago
- ☆47 · Updated last month
- Code for "RSQ: Learning from Important Tokens Leads to Better Quantized LLMs" ☆18 · Updated last month
- [ICLR 2025] Official PyTorch implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆25 · Updated 6 months ago
- [ICML 2025] LoRA fine-tuning directly on quantized models ☆31 · Updated 7 months ago
- ☆26 · Updated last week
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers ☆48 · Updated 2 years ago
- ☆59 · Updated 3 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆33 · Updated 4 months ago
- TerDiT: Ternary Diffusion Models with Transformers ☆71 · Updated last year
- Code for the paper "Patch-Level Training for Large Language Models" ☆85 · Updated 7 months ago
- FocusLLM: Scaling LLM’s Context by Parallel Decoding ☆41 · Updated 7 months ago
- [ICML'24 Oral] The official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… ☆101 · Updated last year
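Several entries above (LASP, the Triton bi-directional kernel, DiJiang) build on the same linear-attention identity, so a generic reference sketch may help when comparing them. This is the textbook kernelized form, not any of these repositories' actual kernels; the ELU+1 feature map is one common choice and is an assumption here.

```python
# Generic bi-directional (non-causal) linear attention in plain PyTorch.
# Replacing softmax(Q K^T) V with phi(Q) (phi(K)^T V) lets the K/V summary be
# computed once, dropping cost from O(n^2 d) to O(n d^2) in sequence length n.
import torch
import torch.nn.functional as F

def linear_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                     eps: float = 1e-6) -> torch.Tensor:
    """q, k, v: (batch, heads, seq, dim); returns (batch, heads, seq, dim)."""
    phi_q = F.elu(q) + 1          # positive feature map (assumed choice)
    phi_k = F.elu(k) + 1
    kv = torch.einsum('bhnd,bhne->bhde', phi_k, v)   # (dim, dim) K/V summary
    z = phi_k.sum(dim=2)                             # normalizer, (b, h, dim)
    num = torch.einsum('bhnd,bhde->bhne', phi_q, kv)
    den = torch.einsum('bhnd,bhd->bhn', phi_q, z).unsqueeze(-1)
    return num / (den + eps)
```

The causal variants used for autoregressive decoding replace the single global `kv` summary with a running prefix sum over positions, which keeps generation O(1) per token.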