thunlp / DeltaPapers
Must-read Papers of Parameter-Efficient Tuning (Delta Tuning) Methods on Pre-trained Models.
☆281 · Updated last year
Alternatives and similar repositories for DeltaPapers:
Users interested in DeltaPapers are comparing it to the libraries listed below.
- Paper List for In-context Learning ☆179 · Updated last year
- Implementation of paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022) ☆523 · Updated 3 years ago
- A plug-and-play library for parameter-efficient-tuning (Delta Tuning) ☆1,021 · Updated 6 months ago
- Paper collection on retrieval-based (augmented) language models. ☆232 · Updated 10 months ago
- Papers and Datasets on Instruction Tuning and Following. ✨✨✨ ☆486 · Updated 11 months ago
- A collection of phenomena observed during the scaling of big foundation models, which may be developed into consensus, principles, or l… ☆277 · Updated last year
- Paper List for In-context Learning ☆849 · Updated 5 months ago
- ICML'2022: Black-Box Tuning for Language-Model-as-a-Service & EMNLP'2022: BBTv2: Towards a Gradient-Free Future with Large Language Model… ☆267 · Updated 2 years ago
- ☆170 · Updated 8 months ago
- Repository for Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning ☆161 · Updated last year
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models ☆204 · Updated last year
- A paper list about diffusion models for natural language processing. ☆182 · Updated last year
- A repository sharing the literature on long-context large language models, including methodologies and evaluation benchmarks ☆260 · Updated 7 months ago
- ☆345 · Updated 3 years ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆424 · Updated 5 months ago
- Collection of training data management explorations for large language models ☆315 · Updated 7 months ago
- [SIGIR'24] The official implementation code of MOELoRA. ☆153 · Updated 8 months ago
- Collaborative Training of Large Language Models in an Efficient Way ☆413 · Updated 7 months ago
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023). ☆309 · Updated last year
- The repository for the survey paper "Survey on Large Language Models Factuality: Knowledge, Retrieval and Domain-Specificity" ☆334 · Updated 11 months ago
- Project for the paper entitled `Instruction Tuning for Large Language Models: A Survey` ☆166 · Updated 3 months ago
- A Survey on Data Selection for Language Models ☆218 · Updated 5 months ago
- Awesome papers on Language-Model-as-a-Service (LMaaS) ☆556 · Updated 10 months ago
- ☆318 · Updated 8 months ago
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆251 · Updated 6 months ago
- Papers & Works for large language models (OpenAI GPT-4, Meta Llama, etc.). ☆310 · Updated 2 weeks ago
- Pytorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets ☆316 · Updated last year
- [NIPS2023] RRHF & Wombat ☆804 · Updated last year
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning. ☆554 · Updated last year
- Must-read papers, related blogs and API tools on the pre-training and tuning methods for ChatGPT. ☆319 · Updated last year