thunlp / OpenDelta
A plug-and-play library for parameter-efficient-tuning (Delta Tuning)
☆1,027 · Updated 7 months ago
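OpenDelta's core idea, shared by several libraries below, is to freeze the pretrained weights and train only a small "delta" module. The snippet below is a minimal plain-Python sketch of one such delta, the LoRA low-rank decomposition (W_eff = W + B·A); it is an illustration of the concept, not OpenDelta's actual API.

```python
# Sketch of delta (parameter-efficient) tuning via a LoRA-style update:
# the frozen weight W is never changed; only the low-rank factors B and A
# would receive gradients. Plain-Python matrices, no framework required.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def add(X, Y):
    """Element-wise sum of two same-shaped matrices."""
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

# Frozen pretrained weight (4x4) -- untouched during delta tuning.
W = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

# Trainable low-rank factors with rank r = 1: B is 4x1, A is 1x4.
# Only these 8 numbers are trained, instead of all 16 entries of W.
B = [[0.5], [0.0], [0.0], [0.0]]
A = [[0.0, 0.2, 0.0, 0.0]]

delta = matmul(B, A)      # rank-1 update, shape 4x4
W_eff = add(W, delta)     # effective weight used in the forward pass

print(W_eff[0])           # -> [1.0, 0.1, 0.0, 0.0]
```

With rank r much smaller than the weight's dimensions, the trainable parameter count drops from d×d to 2·r·d, which is why methods in this family fine-tune large models at a fraction of the memory cost.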
Alternatives and similar repositories for OpenDelta:
Users interested in OpenDelta are comparing it with the libraries listed below.
- A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too". ☆931 · Updated 2 years ago
- Must-read papers on Parameter-Efficient Tuning (Delta Tuning) methods for pre-trained models. ☆282 · Updated last year
- An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks. ☆2,034 · Updated last year
- Prefix-Tuning: Optimizing Continuous Prompts for Generation. ☆923 · Updated last year
- Implementation of the paper "Towards a Unified View of Parameter-Efficient Transfer Learning" (ICLR 2022). ☆527 · Updated 3 years ago
- Code for the EMNLP 2023 paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models". ☆1,157 · Updated last year
- Efficient training (including pre-training and fine-tuning) for big models. ☆587 · Updated 2 weeks ago
- Tencent pre-training framework in PyTorch & pre-trained model zoo. ☆1,069 · Updated 9 months ago
- [NeurIPS 2023] RRHF & Wombat. ☆807 · Updated last year
- ☆914 · Updated 11 months ago
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback. ☆1,457 · Updated 10 months ago
- Papers and datasets on instruction tuning and following. ✨✨✨ ☆492 · Updated last year
- Collaborative training of large language models in an efficient way. ☆415 · Updated 8 months ago
- ☆900 · Updated 9 months ago
- Ongoing research training transformer language models at scale, including BERT & GPT-2. ☆1,386 · Updated last year
- Open academic research on improving LLaMA to a SOTA LLM. ☆1,621 · Updated last year
- Efficient inference for big models. ☆583 · Updated 2 years ago
- ☆459 · Updated 11 months ago
- Reading list on hallucination in LLMs. Check out our new survey paper: "Siren's Song in the AI Ocean: A Survey on Hallucination in Large …" ☆1,013 · Updated 5 months ago
- [ACL 2021] LM-BFF: Better Few-shot Fine-tuning of Language Models (https://arxiv.org/abs/2012.15723). ☆729 · Updated 2 years ago
- Reading list on instruction tuning. A trend started by Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022). ☆768 · Updated last year
- A collection of phenomena observed during the scaling of big foundation models, which may be developed into consensus, principles, or l… ☆279 · Updated last year
- Original implementation of Prompt Tuning from Lester et al., 2021. ☆679 · Updated 2 months ago
- We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, p-tunin… ☆2,741 · Updated last year
- Paper list for in-context learning. 🌷 ☆854 · Updated 7 months ago
- SwissArmyTransformer is a flexible and powerful library for developing your own Transformer variants. ☆1,077 · Updated 4 months ago
- ☆345 · Updated 3 years ago
- ☆397 · Updated 3 years ago
- Best practices for training LLaMA models in Megatron-LM. ☆650 · Updated last year
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024). ☆960 · Updated 5 months ago