UIC-Liu-Lab / ContinualLM
An Extensible Continual Learning Framework Focused on Language Models (LMs)
☆294 · Updated 2 years ago
Alternatives and similar repositories for ContinualLM
Users interested in ContinualLM are comparing it to the libraries listed below:
- Must-read Papers on Large Language Model (LLM) Continual Learning ☆148 · Updated 2 years ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆328 · Updated last year
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆535 · Updated last year
- DSIR large-scale data selection framework for language model training ☆269 · Updated last year
- ☆273 · Updated 2 years ago
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆570 · Updated last year
- Datasets for Instruction Tuning of Large Language Models ☆260 · Updated 2 years ago
- A Survey on Data Selection for Language Models ☆253 · Updated 9 months ago
- ☆209 · Updated 2 years ago
- Official repository of "NEFTune: Noisy Embeddings Improve Instruction Finetuning" ☆409 · Updated last year
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆373 · Updated 2 years ago
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆254 · Updated 2 years ago
- PyTorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets ☆350 · Updated 2 years ago
- Code for "STaR: Bootstrapping Reasoning With Reasoning" (NeurIPS 2022) ☆220 · Updated 2 years ago
- Project for the paper "Instruction Tuning for Large Language Models: A Survey" ☆227 · Updated 6 months ago
- RewardBench: the first evaluation tool for reward models ☆685 · Updated last week
- Code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks ☆551 · Updated last year
- ☆282 · Updated last year
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆512 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆667 · Updated last year
- A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and evaluation ☆386 · Updated 2 years ago
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings (NeurIPS 2023, oral) ☆270 · Updated last year
- A large-scale, fine-grained, diverse preference dataset (and models) ☆361 · Updated 2 years ago
- OpenICL, an open-source framework to facilitate research, development, and prototyping of in-context learning ☆584 · Updated 2 years ago
- ☆104 · Updated 2 years ago
- Code and data for "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models" ☆269 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated 4 months ago
- Papers and Datasets on Instruction Tuning and Following ✨✨✨ ☆507 · Updated last year
- A collection of research papers on Self-Correcting Large Language Models with Automated Feedback ☆565 · Updated last year
- [CSUR 2025] Continual Learning of Large Language Models: A Comprehensive Survey ☆511 · Updated last month