Outsider565 / LoRA-GA ☆192
Alternatives and similar repositories for LoRA-GA: users interested in LoRA-GA are comparing it to the repositories listed below.
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆139
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆347
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆112
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆320
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆323
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆188
- [SIGIR'24] The official implementation code of MOELoRA. ☆160
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆133
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆65
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In…" ☆92
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆156
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆115
- Implementation for "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆363
- qwen-nsa ☆57
- Rectified Rotary Position Embeddings ☆366
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆118
- [EMNLP 2023 Main] Sparse Low-rank Adaptation of Pre-trained Language Models ☆75
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆81
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models ☆114
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models ☆117
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆121
- [ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retenti… ☆65
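Many of the repositories above (LoRA-GA, PiSSA, AdaLoRA, HydraLoRA, LoRA-Pro) are variants of low-rank adaptation. As background, here is a minimal sketch of the core LoRA update they all build on: a frozen weight `W` plus a trainable low-rank product `B @ A`. All names, shapes, and initializations below are illustrative and not taken from any of the listed codebases.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 16, 16, 4       # hypothetical dimensions; rank r << d

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection (zero-initialized)

def lora_forward(x, scale=1.0):
    """y = W x + scale * (B A) x -- only A and B receive gradients."""
    return W @ x + scale * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the adapted model reproduces the base model exactly.
assert np.allclose(lora_forward(x), W @ x)
```

The listed methods differ mainly in how `A` and `B` are initialized (e.g. PiSSA uses principal singular components of `W`, LoRA-GA aligns the initialization with the full fine-tuning gradient) or how the rank budget is allocated across layers (AdaLoRA).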