thunlp / OpenDelta
A plug-and-play library for parameter-efficient tuning (Delta Tuning)
☆1,032 · Updated 10 months ago
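As a quick orientation, below is a minimal sketch of the plug-and-play workflow, following the usage pattern in OpenDelta's README. The backbone checkpoint (`facebook/bart-base`) and the target module name (`fc2`) are illustrative assumptions, and the exact API may differ across OpenDelta versions.

```python
# Hedged sketch of OpenDelta's plug-and-play workflow (based on its README;
# details may vary by version). A LoRA delta is attached to a frozen backbone
# so that only the small injected parameters are trained.
from transformers import AutoModel
from opendelta import LoraModel  # pip install opendelta

backbone = AutoModel.from_pretrained("facebook/bart-base")

# "fc2" matches BART's feed-forward projections; pick module names that
# actually exist in your own backbone (illustrative assumption here).
delta_model = LoraModel(backbone_model=backbone, modified_modules=["fc2"])

# Freeze everything except the injected delta parameters.
delta_model.freeze_module(exclude=["deltas"], set_state_dict=True)
delta_model.log()  # prints which parameters remain trainable
```

From here the backbone trains like any ordinary Transformers model, but the optimizer only updates the small delta weights.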
Alternatives and similar repositories for OpenDelta
Users interested in OpenDelta are comparing it to the libraries listed below
- A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too". ☆934 · Updated 2 years ago
- Prefix-Tuning: Optimizing Continuous Prompts for Generation ☆942 · Updated last year
- Must-read Papers of Parameter-Efficient Tuning (Delta Tuning) Methods on Pre-trained Models. ☆285 · Updated 2 years ago
- An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks ☆2,052 · Updated last year
- Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,184 · Updated last year
- ☆908 · Updated last year
- Open Academic Research on Improving LLaMA to SOTA LLM ☆1,619 · Updated last year
- [NIPS2023] RRHF & Wombat ☆811 · Updated last year
- ☆920 · Updated last year
- Implementation of paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022); the shared low-rank-update form is sketched after this list. ☆535 · Updated 3 years ago
- Paper List for In-context Learning 🌷 ☆857 · Updated 9 months ago
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆604 · Updated 2 months ago
- Collaborative Training of Large Language Models in an Efficient Way ☆417 · Updated 11 months ago
- Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo ☆1,078 · Updated 11 months ago
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large …" ☆1,033 · Updated 8 months ago
- Papers and Datasets on Instruction Tuning and Following. ✨✨✨ ☆498 · Updated last year
- Efficient, Low-Resource, Distributed transformer implementation based on BMTrain ☆258 · Updated last year
- A collection of phenomena observed during the scaling of big foundation models, which may be developed into consensus, principles, or l… ☆284 · Updated last year
- Reading list of instruction tuning. A trend starting from Natural-Instruction (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022). ☆769 · Updated 2 years ago
- ☆459 · Updated last year
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,384 · Updated last year
- ☆758 · Updated last year
- A curated list of research papers in Sentence Representation Learning and an STS leaderboard of sentence embeddings. ☆315 · Updated last year
- Paper List for In-context Learning 🌷 ☆183 · Updated last year
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,512 · Updated last year
- Resource, Evaluation and Detection Papers for ChatGPT ☆458 · Updated last year
- An easy-to-understand guide to fine-tuning LLaMA. ☆401 · Updated 2 years ago
- ☆397 · Updated 3 years ago
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning. ☆569 · Updated last year
- We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs and parameter-efficient methods (e.g., lora, p-tunin… ☆2,761 · Updated last year
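Several entries above (Prefix-Tuning, LLM-Adapters, the unified-view paper) are variants of one idea: keep the pretrained weight W frozen and learn a small additive update. Below is a framework-agnostic LoRA-style sketch in plain PyTorch; the dimensions, rank, and scaling convention are illustrative assumptions, not any one repository's implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B(A x)."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weight
            p.requires_grad = False
        self.lora_A = nn.Linear(base.in_features, r, bias=False)
        self.lora_B = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)  # update starts as an exact no-op
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(x))

# Usage: wrap one projection of a pretrained model and train only the deltas.
layer = LoRALinear(nn.Linear(768, 768))
trainable = [p for p in layer.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable))  # 2 * 768 * 8 = 12,288 parameters
```

Because B is initialized to zero, the wrapped layer reproduces the frozen backbone exactly at step 0, and only the low-rank factors (a tiny fraction of the full weight) receive gradients.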