UIC-Liu-Lab / ContinualLM
An Extensible Continual Learning Framework Focused on Language Models (LMs)
☆266 · Updated last year
Alternatives and similar repositories for ContinualLM:
Users interested in ContinualLM are comparing it to the repositories listed below.
- Must-read Papers on Large Language Model (LLM) Continual Learning ☆141 · Updated last year
- ☆251 · Updated last year
- RewardBench: the first evaluation tool for reward models. ☆508 · Updated this week
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆464 · Updated last month
- All available datasets for Instruction Tuning of Large Language Models ☆242 · Updated last year
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆293 · Updated 5 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆412 · Updated 4 months ago
- ☆166 · Updated last year
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆499 · Updated 3 weeks ago
- PyTorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets ☆312 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆614 · Updated 7 months ago
- A Survey on Data Selection for Language Models ☆210 · Updated 4 months ago
- Code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks ☆541 · Updated 11 months ago
- Data and Code for Program of Thoughts (TMLR 2023) ☆259 · Updated 9 months ago
- Contrastive decoding ☆193 · Updated 2 years ago
- DSIR: large-scale data selection framework for language model training ☆241 · Updated 10 months ago
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆389 · Updated 9 months ago
- Project for the paper `Instruction Tuning for Large Language Models: A Survey` ☆160 · Updated 2 months ago
- A curated list of awesome instruction tuning datasets, models, papers, and repositories ☆325 · Updated last year
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆333 · Updated last year
- Generative Judge for Evaluating Alignment ☆228 · Updated last year
- A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and evaluation ☆344 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆154 · Updated 11 months ago
- A large-scale, fine-grained, diverse preference dataset (and models) ☆329 · Updated last year
- ☆258 · Updated 6 months ago
- Continual Learning of Large Language Models: A Comprehensive Survey ☆344 · Updated 3 weeks ago
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆64 · Updated last year
- OpenICL: an open-source framework to facilitate research, development, and prototyping of in-context learning ☆546 · Updated last year
- ☆154 · Updated 8 months ago
- ICML 2022: Black-Box Tuning for Language-Model-as-a-Service & EMNLP 2022: BBTv2: Towards a Gradient-Free Future with Large Language Model… ☆265 · Updated 2 years ago