qianlima-lab / awesome-lifelong-learning-methods-for-llm
This repository collects surveys, resources, and papers on Lifelong Learning for Large Language Models. (Updated regularly)
☆31 · Updated 2 weeks ago
Related projects
Alternatives and complementary repositories for awesome-lifelong-learning-methods-for-llm
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" ☆100 · Updated 2 weeks ago
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language …" ☆23 · Updated last month
- [SIGIR'24] The official implementation of MOELoRA ☆124 · Updated 3 months ago
- [ACL 2024] A codebase for incremental learning with large language models; official code for "Learn or Recall? Revisiting Increme…" ☆20 · Updated last month
- UniGen: A Unified Framework for Dataset Generation via Large Language Model ☆29 · Updated last month
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆59 · Updated 9 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆86 · Updated 2 months ago
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆230 · Updated 6 months ago
- Code for the ACL 2024 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆15 · Updated 6 months ago
- Survey on Data-centric Large Language Models ☆65 · Updated 4 months ago
- Continual Learning of Large Language Models: A Comprehensive Survey ☆252 · Updated last week
- Code for "Iterative Tool Learning from Introspection Feedback by Easy-to-Difficult Curriculum" ☆17 · Updated 7 months ago
- MoCLE: the first MLLM with MoE for instruction customization and generalization (https://arxiv.org/abs/2312.12379) ☆29 · Updated 7 months ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆92 · Updated 2 months ago
- Must-read Papers on Large Language Model (LLM) Continual Learning ☆134 · Updated last year
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆119 · Updated last week
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$ ☆29 · Updated 3 weeks ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆70 · Updated 9 months ago
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning ☆22 · Updated this week
- Paper list and datasets for the paper "A Survey on Data Selection for LLM Instruction Tuning" ☆33 · Updated 9 months ago
- A Survey on the Honesty of Large Language Models ☆46 · Updated last month
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆44 · Updated this week