This is the official repository for the paper "Flora: Low-Rank Adapters Are Secretly Gradient Compressors" (ICML 2024).
☆106, updated Jul 1, 2024
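The paper's title refers to the observation that accumulating updates through a low-rank adapter behaves like compressing gradients with a random down-projection, which can be undone in expectation by projecting back up. Below is a rough NumPy sketch of that idea only, not the repository's actual implementation; the `compress`/`decompress` helpers and all matrix sizes are illustrative assumptions.

```python
import numpy as np

def compress(grad, proj):
    """Down-project an (m, n) gradient to (m, r) with a random matrix."""
    return grad @ proj

def decompress(comp, proj):
    """Reconstruct the gradient. For a Gaussian proj with unit-variance
    entries, E[proj @ proj.T] = r * I, so dividing by r makes this an
    unbiased estimate of the original gradient."""
    return comp @ proj.T / proj.shape[1]

rng = np.random.default_rng(0)
m, n, r = 4, 256, 16  # r << n: the compressed gradient is 16x smaller
grad = rng.standard_normal((m, n))

# A single reconstruction is noisy, but averaging over fresh random
# projections converges to the true gradient (unbiasedness).
est = np.zeros_like(grad)
trials = 2000
for _ in range(trials):
    proj = rng.standard_normal((n, r))
    est += decompress(compress(grad, proj), proj)
est /= trials

rel_err = np.linalg.norm(est - grad) / np.linalg.norm(grad)
```

The practical appeal is memory: optimizer state can be kept in the (m, r) compressed space, with the projection regenerated from a stored random seed instead of being kept in memory.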
Alternatives and similar repositories for flora-opt
Users interested in flora-opt are comparing it to the libraries listed below.
- An official implementation for the EMNLP 2023 Findings paper "Prompt-Based Editing for Text Style Transfer" (☆13, updated Dec 9, 2023)
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection (☆1,684, updated Oct 28, 2024)
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models (☆38, updated Feb 27, 2024)
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark" (☆124, updated Jul 6, 2025)
- Source code for the paper "Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models" (☆34, updated Jun 20, 2024)
- The official repo for the paper "Teacher Forcing Recovers Reward Functions for Text Generation" (☆31, updated May 27, 2023)
- The heart of the Pulsar App: fast, secure, and shared inference with a modern UI (☆59, updated Dec 1, 2024)
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? (☆118, updated Oct 21, 2024)
- Pytorch2Jax is a small Python library that wraps PyTorch models into Jax functions and Flax modules (☆21, updated Feb 20, 2023)
- Official code for ReLoRA, from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" (☆474, updated Apr 21, 2024)
- Official code for the paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" (☆144, updated Apr 8, 2025)
- Official repository for the paper "NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks" (☆60, updated Oct 31, 2024)
- New optimizer (☆20, updated Aug 4, 2024)
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" (☆30, updated Nov 12, 2024)
- [ICLR 2021] Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning (☆24, updated Mar 18, 2021)
- Official implementation of the transformer (TF) architecture suggested in the paper "Looped Transformers as Programmable Computers" (☆37, updated Apr 8, 2023)
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models (☆85, updated Mar 5, 2024)
- [ICDCS 2023] Evaluation and Optimization of Gradient Compression for Distributed Deep Learning (☆10, updated Apr 28, 2023)
- CompChomper is a framework for measuring how LLMs perform at code completion (☆21, updated Apr 29, 2025)
- Extends the conditioning of Stable Diffusion to take audio embeddings instead of text embeddings, using the Wav2Vec2-BERT model (☆13, updated Sep 25, 2024)
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (☆31, updated May 22, 2024)
- Simple LLM inference server (☆20, updated Jun 13, 2024)
- Implementation of the EMNLP 2020 paper "Data Rejuvenation: Exploiting Inactive Training Examples for Neural Machine Translation" (☆23, updated Aug 20, 2021)
- Code for reproducing the paper "Not All Language Model Features Are Linear" (☆84, updated Nov 27, 2024)
- Official implementation repository for the paper "Towards General Conceptual Model Editing via Adversarial Representation Engineering" (☆19, updated Dec 6, 2024)
- Prune transformer layers (☆74, updated May 30, 2024)
- This repository contains code for the MicroAdam paper (☆21, updated Dec 14, 2024)
- A Field-Theoretic Approach to Unbounded Memory in Large Language Models (☆20, updated Apr 15, 2025)