mzf666 / LORO-main
Official implementation of the ICLR 2025 paper "LORO: Parameter and Memory Efficient Pretraining via Low-rank Riemannian Optimization"
☆13 · Updated 6 months ago
Alternatives and similar repositories for LORO-main
Users interested in LORO-main are comparing it to the repositories listed below.
- LoRA-One: One-Step Full Gradient Could Suffice for Fine-Tuning Large Language Models, Provably and Efficiently (ICML 2025 Oral) ☆24 · Updated last week
- ICLR 2025 ☆29 · Updated 5 months ago
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆35 · Updated last year
- [NeurIPS 2024] VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections ☆21 · Updated last year
- Source code for the paper "Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models" ☆32 · Updated last year
- Towards Meta-Pruning via Optimal Transport, ICLR 2024 (Spotlight) ☆16 · Updated 10 months ago
- The repo for the HiRA paper ☆32 · Updated 3 months ago
- Implementation of the paper "Training Free Pretrained Model Merging" (CVPR 2024) ☆31 · Updated last year
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆83 · Updated last year
- Official PyTorch implementation of our ICLR 2024 paper: Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆50 · Updated last year
- Code for the paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆22 · Updated last year
- Official code implementation for the ICLR 2025 paper "Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives" ☆47 · Updated 2 weeks ago
- Official PyTorch implementation of the paper "Accelerating Diffusion Large Language Models with SlowFast Sampling: The Three Golden Princ…" ☆33 · Updated 3 months ago
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆67 · Updated last year
- ☆60 · Updated 10 months ago
- An official PyTorch implementation of the paper "Partial Network Cloning", CVPR 2023 ☆13 · Updated 2 years ago
- Awesome-Low-Rank-Adaptation ☆118 · Updated last year
- Empowering Small VLMs to Think with Dynamic Memorization and Exploration ☆15 · Updated last month
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning ☆157 · Updated 4 months ago
- ☆148 · Updated last year
- This repo contains the source code for VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks (NeurIPS 2024) ☆42 · Updated last year
- Official code for our paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆133 · Updated 6 months ago
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆35 · Updated last year
- [ICLR 2024 Spotlight] Official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Di…" ☆66 · Updated last year
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024) ☆20 · Updated last year
- dParallel: Learnable Parallel Decoding for dLLMs ☆38 · Updated 2 weeks ago
- ☆123 · Updated last year
- [ICLR 2025] Official implementation of the paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models" ☆23 · Updated 7 months ago
- [ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Di… ☆66 · Updated last year
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆45 · Updated last year