BorealisAI / flora-opt
This is the official repository for the paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" (ICML 2024).
☆104 · Updated 10 months ago
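The paper's core observation is that a LoRA-style update acts like a random down-projection of the gradient, so optimizer state can live in a much smaller space and be projected back up at update time. The sketch below illustrates that idea in plain PyTorch; the class name and structure are illustrative assumptions, not the flora-opt API, and the actual method is more involved (the paper, for instance, periodically resamples the projection).

```python
import torch

class LowRankMomentumSGD:
    """Illustrative sketch only (hypothetical class, not the flora-opt API):
    keep SGD momentum for each 2-D weight in a rank-r space by compressing
    gradients with a fixed random projection."""

    def __init__(self, params, lr=1e-2, rank=8, beta=0.9, seed=0):
        self.params = [p for p in params if p.requires_grad]
        self.lr, self.beta = lr, beta
        self.state = {}
        gen = torch.Generator().manual_seed(seed)
        for p in self.params:
            if p.dim() == 2:  # only weight matrices get the compressed path
                proj = torch.randn(p.shape[1], rank, generator=gen) / rank ** 0.5
                self.state[p] = {
                    "proj": proj.to(p.device),                            # (n, r)
                    "m": torch.zeros(p.shape[0], rank, device=p.device),  # (m, r)
                }

    @torch.no_grad()
    def step(self):
        for p in self.params:
            if p.grad is None:
                continue
            if p in self.state:                      # 2-D weight: compressed path
                st = self.state[p]
                g_low = p.grad @ st["proj"]          # compress: (m, n) -> (m, r)
                st["m"].mul_(self.beta).add_(g_low)  # momentum stored at rank r
                update = st["m"] @ st["proj"].T      # decompress: (m, r) -> (m, n)
            else:                                    # biases/norms: plain SGD
                update = p.grad
            p.add_(update, alpha=-self.lr)

# Tiny demo: one optimization step on a linear layer.
model = torch.nn.Linear(512, 512)
opt = LowRankMomentumSGD(model.parameters(), lr=1e-2, rank=8)
model(torch.randn(4, 512)).pow(2).mean().backward()
opt.step()
```

Storing momentum at rank r cuts that state from m×n to m×r entries per weight matrix, which is where the memory saving in this view of LoRA comes from.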
Alternatives and similar repositories for flora-opt:
Users interested in flora-opt are comparing it to the repositories listed below.
- Token Omission Via Attention ☆126 · Updated 6 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆123 · Updated 8 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs ☆105 · Updated this week
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆97 · Updated 7 months ago
- [NeurIPS 2024] Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆215 · Updated this week
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆231 · Updated 3 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆159 · Updated last year
- EvaByte: Efficient Byte-level Language Models at Scale ☆91 · Updated 2 weeks ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆126 · Updated 5 months ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆123 · Updated last year
- Implementation of 🥥 Coconut (Chain of Continuous Thought) in PyTorch ☆165 · Updated 4 months ago
- Model Stock: All we need is just a few fine-tuned models ☆113 · Updated 7 months ago
- Official implementation of the paper "d1: Scaling Reasoning in Diffusion Large Language Models via Reinforcement Learning" ☆100 · Updated 2 weeks ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆232 · Updated 2 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆27 · Updated 7 months ago
- Mixture of A Million Experts ☆44 · Updated 9 months ago
- Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆143 · Updated 7 months ago
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆158 · Updated 10 months ago
- Tiny re-implementation of MDM in the style of LLaDA and the nano-gpt speedrun ☆49 · Updated last month
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆66 · Updated 6 months ago
- Implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆91 · Updated last week
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆82 · Updated 11 months ago