SJTU-DeepVisionLab / FLoRA
☆40 · Updated last year
Alternatives and similar repositories for FLoRA
Users interested in FLoRA are comparing it to the repositories listed below.
- The official implementation of MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning (CVPR '24) ☆58 · Updated last month
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆127 · Updated 4 months ago
- ☆112 · Updated last year
- ☆91 · Updated 2 years ago
- CLIP-MoE: Mixture of Experts for CLIP ☆42 · Updated 9 months ago
- [CVPR 2024] The official PyTorch implementation of "A General and Efficient Training for Transformer via Token Expansion" ☆44 · Updated last year
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning ☆153 · Updated last month
- [NeurIPS 2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model ☆88 · Updated last year
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding" … ☆57 · Updated 9 months ago
- Toy reproduction of the Auxiliary-Loss-Free Load Balancing Strategy for Mixture-of-Experts ☆19 · Updated 11 months ago
- [ICLR 2025] The official implementation of "Autoregressive Pretraining with Mamba in Vision" ☆83 · Updated 2 months ago
- [ICML 2024] Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning ☆49 · Updated last year
- The official implementation of "2024NeurIPS Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation"☆46Updated 7 months ago
- ☆147 · Updated 11 months ago
- [MM 2024, oral] "Self-Supervised Visual Preference Alignment" https://arxiv.org/abs/2404.10501 ☆56 · Updated last year
- [CVPR 2024] Official implementation of "CLIP-KD: An Empirical Study of CLIP Model Distillation" ☆123 · Updated last year
- Adapting LLaMA Decoder to Vision Transformer ☆29 · Updated last year
- A training-free approach to accelerate ViTs and VLMs by pruning redundant tokens based on similarity ☆30 · Updated 2 months ago
- This repo contains the source code for VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks (NeurIPS 2024) ☆39 · Updated 9 months ago
- MoCLE, the first MLLM with MoE for instruction customization and generalization (https://arxiv.org/abs/2312.12379) ☆42 · Updated last month
- A collection of papers about Mamba (a selective state space model) ☆14 · Updated last year
- Scaling Multi-modal Instruction Fine-tuning with Tens of Thousands Vision Task Types ☆27 · Updated 3 weeks ago
- ☆135 · Updated last year
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆69 · Updated 3 months ago
- [ICML 2024] Official PyTorch implementation of "SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-paramete… ☆107 · Updated 11 months ago
- Official repository of Polarity-aware Linear Attention for Vision Transformers (ICLR 2025) ☆68 · Updated 2 months ago
- Official repository of InLine attention (NeurIPS 2024) ☆52 · Updated 7 months ago
- ☆119 · Updated last year
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆80 · Updated last year
- ML-Mamba: Efficient Multi-Modal Large Language Model Utilizing Mamba-2 ☆66 · Updated 8 months ago