UIC-Liu-Lab / ContinualLM
An Extensible Continual Learning Framework Focused on Language Models (LMs)
☆280 · Updated last year
Alternatives and similar repositories for ContinualLM
Users interested in ContinualLM are comparing it to the libraries listed below.
- Must-read Papers on Large Language Model (LLM) Continual Learning ☆141 · Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning (see the embedding-noise sketch after this list) ☆396 · Updated last year
- ☆259 · Updated last year
- A Survey on Data Selection for Language Models ☆234 · Updated last month
- RewardBench: the first evaluation tool for reward models ☆590 · Updated this week
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" (see the layer-contrast sketch after this list) ☆493 · Updated 4 months ago
- All available datasets for Instruction Tuning of Large Language Models ☆250 · Updated last year
- DSIR large-scale data selection framework for language model training ☆249 · Updated last year
- ☆179 · Updated last year
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark
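
For context on the NEFTune entry above: the paper's core trick is to add uniform noise to the embedding output during finetuning, scaled by `alpha / sqrt(L * d)` for sequence length `L` and embedding dimension `d`. Below is a minimal sketch of that idea, not the official implementation; the function name, wrapper interface, and the `alpha=5.0` default are illustrative assumptions.

```python
import torch

def neftune_embed(embed_layer, input_ids, alpha=5.0):
    """Embedding lookup with NEFTune-style noise (hypothetical helper)."""
    embeds = embed_layer(input_ids)  # (batch, L, d)
    if embed_layer.training:
        L, d = embeds.shape[1], embeds.shape[2]
        # Uniform(-1, 1) noise scaled by alpha / sqrt(L * d), per the paper.
        scale = alpha / (L * d) ** 0.5
        embeds = embeds + torch.zeros_like(embeds).uniform_(-1, 1) * scale
    return embeds
```

In a training loop this would replace the plain embedding lookup; the noise is applied only at finetuning time, and inference is unchanged.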
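Similarly, the DoLa entry contrasts the token distribution induced by the final ("mature") layer against an early ("premature") layer to favor factual tokens. Here is a minimal sketch assuming a Hugging Face transformers causal LM; note that the paper selects the premature layer dynamically per step (via Jensen-Shannon divergence) while this sketch fixes it, and the helper name and `alpha=0.1` plausibility cutoff are illustrative assumptions.

```python
import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def layer_contrast_logits(model, input_ids, premature_layer=16, alpha=0.1):
    """Layer-contrastive next-token scores in the spirit of DoLa (sketch only)."""
    out = model(input_ids, output_hidden_states=True)
    lm_head = model.get_output_embeddings()  # unembedding / LM head
    # Project both layers' last-position hidden states through the LM head.
    # (Real implementations also apply the model's final layer norm to the
    # premature hidden state first; that detail is elided here.)
    mature = F.log_softmax(lm_head(out.hidden_states[-1][:, -1]), dim=-1)
    premature = F.log_softmax(
        lm_head(out.hidden_states[premature_layer][:, -1]), dim=-1
    )
    # Contrast: tokens the mature layer prefers over the premature layer score high.
    contrast = mature - premature
    # Adaptive plausibility constraint: keep only tokens whose mature-layer
    # probability is at least alpha times that of the most likely token.
    cutoff = mature.max(dim=-1, keepdim=True).values + math.log(alpha)
    return contrast.masked_fill(mature < cutoff, float("-inf"))
```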