circle-hit / SAPT
Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language Models"
☆36 · Updated 10 months ago
Alternatives and similar repositories for SAPT
Users interested in SAPT are comparing it to the libraries listed below.
- Code for the ACL 2024 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆33 · Updated 9 months ago
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆81 · Updated last year
- Model merging is a highly efficient approach for long-to-short reasoning. ☆92 · Updated last month
- [ICLR 2025] Released code for paper "Spurious Forgetting in Continual Learning of Language Models" ☆55 · Updated 6 months ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated 2 years ago
- ☆192 · Updated last year
- ☆168 · Updated last year
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆44 · Updated 5 months ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" ☆137 · Updated last year
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆167 · Updated last year
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- [2025-TMLR] A Survey on the Honesty of Large Language Models ☆63 · Updated 11 months ago
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing ☆36 · Updated last year
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models. NeurIPS 2024 ☆86 · Updated last year
- [SIGIR'24] The official implementation code of MOELoRA ☆185 · Updated last year
- [ICLR 2025] Language Imbalance Driven Rewarding for Multilingual Self-improving ☆24 · Updated 3 months ago
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- Official code for our paper "Reasoning Models Hallucinate More: Factuality-Aware Reinforcement Learning for Large Reasoning Models" ☆19 · Updated last month
- ☆25 · Updated 2 years ago
- Must-read Papers on Large Language Model (LLM) Continual Learning ☆148 · Updated 2 years ago
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning ☆36 · Updated last year
- Official code and data for the ACL 2024 Findings paper "An Empirical Study on Parameter-Efficient Fine-Tuning for MultiModal Large Language Models" ☆24 · Updated last year
- A method of ensemble learning for heterogeneous large language models. ☆64 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆63 · Updated last year
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping ☆60 · Updated 6 months ago
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆70 · Updated 4 months ago
- ☆47 · Updated last year
- [EMNLP 2025] TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆193 · Updated this week
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆87 · Updated 9 months ago
- my commonly-used tools ☆63 · Updated 10 months ago