UIC-Liu-Lab / ContinualLM
An Extensible Continual Learning Framework Focused on Language Models (LMs)
☆277 · Updated last year
Alternatives and similar repositories for ContinualLM
Users interested in ContinualLM are comparing it to the libraries listed below.
- Must-read Papers on Large Language Model (LLM) Continual Learning ☆141 · Updated last year
- RewardBench: the first evaluation tool for reward models ☆566 · Updated last week
- DSIR: large-scale data selection framework for language model training ☆247 · Updated last year
- PyTorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets ☆324 · Updated last year
- A Survey on Data Selection for Language Models ☆230 · Updated 2 weeks ago
- All available datasets for Instruction Tuning of Large Language Models ☆250 · Updated last year
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆484 · Updated 4 months ago
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆261 · Updated 8 months ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆304 · Updated 8 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆443 · Updated 6 months ago
- Data and Code for Program of Thoughts (TMLR 2023) ☆272 · Updated last year
- Implementation of the ICML 2023 paper: Specializing Smaller Language Models towards Multi-Step Reasoning ☆130 · Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆395 · Updated 11 months ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks ☆546 · Updated last year
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆240 · Updated last year
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆343 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆161 · Updated last week
- Code for STaR: Bootstrapping Reasoning With Reasoning (NeurIPS 2022) ☆205 · Updated 2 years ago
- Continual Learning of Large Language Models: A Comprehensive Survey ☆400 · Updated last month
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs ☆411 · Updated last year
- Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467 ☆285 · Updated 3 months ago
- ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings - NeurIPS 2023 (oral) ☆262 · Updated last year
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model ☆522 · Updated 3 months ago
- Contrastive decoding ☆199 · Updated 2 years ago
- A Survey of Attributions for Large Language Models ☆201 · Updated 8 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆554 · Updated 5 months ago
- Simple next-token-prediction for RLHF ☆225 · Updated last year