MohammadrezaBanaei / LoRA-XS
LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters
☆20 · Updated last week
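LoRA-XS trains far fewer parameters than standard LoRA by freezing both low-rank factors and learning only a tiny matrix between them. A minimal numpy sketch of that idea, assuming the formulation ΔW = A·R·B with A and B taken from a truncated SVD of the frozen pretrained weight (variable names are illustrative, not the repository's API):

```python
import numpy as np

# Sketch of the LoRA-XS idea (not the authors' code):
# standard LoRA trains two factors of shapes (d_out, r) and (r, d_in),
# while LoRA-XS freezes A and B -- derived from a truncated SVD of the
# pretrained weight W -- and trains only a tiny r x r matrix R.

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 4

W = rng.standard_normal((d_out, d_in))           # frozen pretrained weight
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * S[:r]                             # frozen: top-r left singular factors
B = Vt[:r, :]                                    # frozen: top-r right singular factors
R = np.zeros((r, r))                             # the ONLY trainable matrix

def forward(x):
    # adapted layer: W x + A R B x  (R = 0 means the layer starts unchanged)
    return W @ x + A @ (R @ (B @ x))

# trainable-parameter comparison: LoRA uses r*(d_out + d_in), LoRA-XS only r*r
lora_params = r * (d_out + d_in)
lora_xs_params = r * r
print(lora_params, lora_xs_params)  # 512 16
```

At rank 4 on a 64×64 layer this is 16 trainable scalars instead of LoRA's 512, and the gap widens with layer size since r² is independent of the layer dimensions.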
Related projects
Alternatives and complementary repositories for LoRA-XS
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023. ☆32 · Updated last year
- Awesome-Low-Rank-Adaptation ☆34 · Updated last month
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆71 · Updated last week
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆28 · Updated last month
- CLIP-MoE: Mixture of Experts for CLIP ☆17 · Updated last month
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts" ☆44 · Updated last year
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆49 · Updated 2 weeks ago
- AdaMoLE: Adaptive Mixture of LoRA Experts ☆16 · Updated last month
- [ICML 2024 Oral] Official implementation of our paper "Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti…" ☆59 · Updated 6 months ago
- Source code of the EMNLP 2022 Findings paper "SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters" ☆19 · Updated 7 months ago
- Source code of the EMNLP 2023 main-conference paper "Sparse Low-rank Adaptation of Pre-trained Language Models" ☆69 · Updated 8 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆30 · Updated 3 weeks ago
- Official PyTorch implementation of our ICLR 2024 paper "Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM…" ☆36 · Updated 7 months ago
- [ACL 2023] Code for the paper "Tailoring Instructions to Student's Learning Levels Boosts Knowledge Distillation" (https://arxiv.org/abs/2305.…) ☆37 · Updated last year
- Official code for our paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆71 · Updated 3 weeks ago
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ☆45 · Updated last month
- Dataset pruning for ImageNet and LAION-2B. ☆68 · Updated 4 months ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆44 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" ☆121 · Updated 8 months ago
- Code for T-MARS data filtering ☆35 · Updated last year
- [ICML 2024] CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers. ☆25 · Updated last year
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆29 · Updated last year
- Source code for the paper "Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models" ☆17 · Updated 4 months ago
- Repository for "Model Merging by Uncertainty-Based Gradient Matching", ICLR 2024. ☆20 · Updated 5 months ago
- EfficientVLM: Fast and Accurate Vision-Language Models via Knowledge Distillation and Modal-adaptive Pruning (ACL 2023) ☆20 · Updated last year
- Official repository of "On the Effectiveness of LayerNorm Tuning for Continual Learning in Vision Transformers" (Visual Continual Learnin… ☆8 · Updated 10 months ago
- Code for the paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆13 · Updated 2 months ago
- A curated list of Model Merging methods. ☆82 · Updated last month