zzz47zzz / awesome-lifelong-learning-methods-for-llm
This repository collects awesome surveys, resources, and papers for Lifelong Learning with Large Language Models. (Updated Regularly)
☆31 · Updated 2 weeks ago
Related projects
Alternatives and complementary repositories for awesome-lifelong-learning-methods-for-llm
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models☆59 · Updated 9 months ago
- [ACL 2024] A Codebase for Incremental Learning with Large Language Models; official code release for "Learn or Recall? Revisiting Increme…☆20 · Updated last month
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning".☆100 · Updated 2 weeks ago
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning☆22 · Updated this week
- [SIGIR'24] The official implementation code of MOELoRA.☆124 · Updated 3 months ago
- Continual Learning of Large Language Models: A Comprehensive Survey☆252 · Updated last week
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method☆92 · Updated 2 months ago
- Must-read Papers on Large Language Model (LLM) Continual Learning☆134 · Updated last year
- A curated list of awesome resources dedicated to Scaling Laws for LLMs☆63 · Updated last year
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language …☆23 · Updated last month
- [ICLR 2024] The repository for the paper "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning"☆94 · Updated 7 months ago
- [NeurIPS 2024] Official code of $\beta$-DPO: Direct Preference Optimization with Dynamic $\beta$☆29 · Updated 3 weeks ago
- Code for the paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning" (ACL 2022)☆58 · Updated 2 years ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024)☆97 · Updated 7 months ago
- A Survey on the Honesty of Large Language Models☆46 · Updated last month
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging☆36 · Updated this week
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024)☆45 · Updated 7 months ago
- Code for https://arxiv.org/abs/2401.17139 (NeurIPS 2024)☆25 · Updated this week
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint"☆33 · Updated 10 months ago
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT☆44 · Updated this week
- AdaMerging: Adaptive Model Merging for Multi-Task Learning (ICLR 2024)☆53 · Updated 3 weeks ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023)☆35 · Updated 7 months ago
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati…☆25 · Updated 4 months ago
- Code for the ACL 2024 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning"☆15 · Updated 6 months ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations"☆58 · Updated 11 months ago