thunlp / OpenPrompt
An Open-Source Framework for Prompt-Learning.
☆4,655 · Updated 11 months ago
Alternatives and similar repositories for OpenPrompt
Users interested in OpenPrompt are comparing it to the libraries listed below.
- Must-read papers on prompt-based tuning for pre-trained language models. ☆4,232 · Updated last year
- Toolkit for creating, sharing, and using natural language prompts. ☆2,890 · Updated last year
- An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks. ☆2,044 · Updated last year
- A trend that started with "Chain of Thought Prompting Elicits Reasoning in Large Language Models". ☆2,047 · Updated last year
- A plug-and-play library for parameter-efficient tuning (Delta Tuning). ☆1,029 · Updated 9 months ago
- Prefix-Tuning: Optimizing Continuous Prompts for Generation. ☆934 · Updated last year
- A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too". ☆933 · Updated 2 years ago
- General technology for enabling AI capabilities with LLMs and MLLMs. ☆4,043 · Updated last week
- Aligning pretrained language models with instruction data generated by themselves. ☆4,408 · Updated 2 years ago
- Official implementation of "Automatic Chain of Thought Prompting in Large Language Models" (stay tuned; more will be updated). ☆1,876 · Updated last year
- A repo for distributed training of language models with Reinforcement Learning from Human Feedback (RLHF). ☆4,672 · Updated last year
- [EMNLP 2021] SimCSE: Simple Contrastive Learning of Sentence Embeddings. https://arxiv.org/abs/2104.08821 ☆3,570 · Updated 8 months ago
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting. ☆2,736 · Updated 11 months ago
- AutoPrompt: Automatic Prompt Construction for Masked Language Models. ☆627 · Updated 10 months ago
- We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, p-tunin…). ☆2,752 · Updated last year
- Instruction Tuning with GPT-4. ☆4,312 · Updated 2 years ago
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆18,912 · Updated this week
- Large-scale self-supervised pre-training across tasks, languages, and modalities. ☆21,461 · Updated last month
- A unified library for parameter-efficient and modular transfer learning. ☆2,726 · Updated last month
- Example models using DeepSpeed. ☆6,554 · Updated this week
- Open academic research on improving LLaMA to a SOTA LLM. ☆1,618 · Updated last year
- An extensible toolkit for fine-tuning and inference of large foundation models. Large models for all. ☆8,441 · Updated last month
- Official implementation of "Multimodal Chain-of-Thought Reasoning in Language Models" (stay tuned; more will be updated). ☆3,943 · Updated last year
- A modular RL library to fine-tune language models to human preferences. ☆2,322 · Updated last year
- GLM (General Language Model). ☆3,240 · Updated last year
- Code for the EMNLP 2023 paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models". ☆1,182 · Updated last year
- BertViz: Visualize Attention in NLP Models (BERT, GPT-2, BART, etc.). ☆7,513 · Updated last month
- ⚡ LLM Zoo: a project that provides data, models, and evaluation benchmarks for large language models. ⚡ ☆2,944 · Updated last year
- Original implementation of Prompt Tuning from Lester et al., 2021. ☆686 · Updated 4 months ago
- [ICLR 2024] Fine-tuning LLaMA to follow instructions within 1 hour and with 1.2M parameters. ☆5,885 · Updated last year
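Many of the prompt-learning toolkits above (OpenPrompt, AutoPrompt, the prompt-sharing toolkit) share one core pattern: wrap the raw input in a cloze-style template with a mask slot, then map the word a masked language model predicts at that slot back to a class label via a "verbalizer". A minimal dependency-free sketch of that pattern (all names here are illustrative, not the API of any listed library):

```python
# Sketch of the prompt-learning pattern shared by the toolkits above:
# a template wraps the input, a verbalizer maps label words to classes.
# Function and variable names are hypothetical, for illustration only.

def apply_template(text: str, template: str = "{text} It was {mask}.") -> str:
    """Wrap raw input in a cloze-style prompt with a mask slot."""
    return template.format(text=text, mask="[MASK]")

# Verbalizer: each class label is represented by a set of label words.
VERBALIZER = {"positive": ["great", "good"], "negative": ["terrible", "bad"]}

def verbalize(predicted_word: str) -> str:
    """Map the word predicted at the mask position to a class label."""
    for label, words in VERBALIZER.items():
        if predicted_word in words:
            return label
    return "unknown"

prompt = apply_template("The movie was a delight.")
print(prompt)               # "The movie was a delight. It was [MASK]."
# In a real pipeline a masked LM fills the slot; here we stub its output.
print(verbalize("great"))   # "positive"
```

In the real libraries the mask token, template syntax, and verbalizer scoring are model-specific; this only shows the shape of the abstraction that makes prompts swappable across tasks.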