cmnfriend / O-LoRA
☆172 · Updated 9 months ago
Alternatives and similar repositories for O-LoRA:
Users interested in O-LoRA are comparing it to the repositories listed below.
- [SIGIR'24] The official implementation code of MOELoRA. ☆160 · Updated 9 months ago
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆323 · Updated 11 months ago
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language …" ☆34 · Updated 3 months ago
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆67 · Updated last year
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆118 · Updated 5 months ago
- Official code for the paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆112 · Updated 2 weeks ago
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆320 · Updated last year
- This repository collects surveys, resources, and papers on lifelong learning for large language models. (Updated regularly) ☆46 · Updated 2 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆133 · Updated last month
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆188 · Updated 4 months ago
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆162 · Updated last year
- A generalized framework for subspace-tuning methods in parameter-efficient fine-tuning. ☆139 · Updated 2 months ago
- [EMNLP 2023 Main] Sparse Low-rank Adaptation of Pre-trained Language Models ☆75 · Updated last year
- MoCLE (the first MLLM with MoE for instruction customization and generalization) (https://arxiv.org/abs/2312.12379) ☆35 · Updated last year
- Awesome-Long2short-on-LRMs is a collection of state-of-the-art, novel, exciting long2short methods on large reasoning models. It contains… ☆190 · Updated last week
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆168 · Updated 10 months ago
- State-of-the-art parameter-efficient MoE fine-tuning method ☆156 · Updated 8 months ago
- An easy-to-use, scalable, and high-performance RLHF framework designed for multimodal models. ☆114 · Updated 2 weeks ago
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ☆129 · Updated 2 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆435 · Updated 6 months ago
- Implementation of "Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs" ☆363 · Updated 3 months ago
- Must-read papers on Large Language Model (LLM) continual learning ☆141 · Updated last year
- A curated reading list for large language model (LLM) alignment. Take a look at our new survey "Large Language Model Alignment: A Survey"… ☆78 · Updated last year
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆81 · Updated 4 months ago