jackaduma / Alpaca-LoRA-RLHF-PyTorch
A full pipeline to fine-tune the Alpaca LLM with LoRA and RLHF on consumer hardware: an implementation of RLHF (Reinforcement Learning from Human Feedback) on top of the Alpaca architecture. Basically ChatGPT, but with Alpaca.
☆60 · Updated 2 years ago
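For context, the LoRA technique this pipeline relies on freezes the base weight matrix W and learns a low-rank correction, so the effective weight is W + (alpha/r)·A·B. A minimal NumPy sketch of that update (illustrative only — not the repository's code; all names here are hypothetical):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=4):
    """Forward pass with a LoRA adapter: y = x @ (W + (alpha/r) * A @ B).

    W is the frozen base weight (d_in x d_out); only the low-rank
    factors A (d_in x r) and B (r x d_out) are trained.
    """
    return x @ W + (alpha / r) * (x @ A @ B)

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 8, 4
W = rng.standard_normal((d_in, d_out))        # frozen pretrained weight
A = rng.standard_normal((d_in, r)) * 0.01     # small random init
B = np.zeros((r, d_out))                      # zero init, so training starts from the base model
x = rng.standard_normal((2, d_in))

# At initialization the adapter path contributes nothing:
assert np.allclose(lora_forward(x, W, A, B), x @ W)
```

Because B starts at zero, the adapted model is initially identical to the base model, and only the tiny A/B factors (rather than the full W) need gradients — which is what makes fine-tuning feasible on consumer hardware.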
Alternatives and similar repositories for Alpaca-LoRA-RLHF-PyTorch
Users interested in Alpaca-LoRA-RLHF-PyTorch are comparing it to the repositories listed below.
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs. ☆89 · Updated last year
- Unofficial implementation of AlpaGasus. ☆93 · Updated 2 years ago
- On Transferability of Prompt Tuning for Natural Language Processing. ☆100 · Updated last year
- ⚡Research papers about leveraging the capabilities of language models⚡ ☆52 · Updated 2 years ago
- ☆142 · Updated 2 years ago
- Code for the ACL 2023 paper "Pre-Training to Learn in Context". ☆106 · Updated last year
- [NeurIPS 2023] Code for the paper "Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias". ☆155 · Updated 2 years ago
- ☆173 · Updated 2 years ago
- ☆74 · Updated last year
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning. ☆248 · Updated 2 years ago
- [AAAI 2024] Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following. ☆78 · Updated last year
- Code for "Small Models are Valuable Plug-ins for Large Language Models". ☆131 · Updated 2 years ago
- Source code for the paper "Active Prompting with Chain-of-Thought for Large Language Models". ☆247 · Updated last year
- ☆56 · Updated 2 years ago
- Repo for the paper "Shepherd: A Critic for Language Model Generation". ☆219 · Updated 2 years ago
- ☆35 · Updated 2 years ago
- Reverse Instructions to generate instruction-tuning data from corpus examples. ☆214 · Updated last year
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆138 · Updated 6 months ago
- Official implementation of the paper "MVP: Multi-task Supervised Pre-training for Natural Language Generation". ☆73 · Updated 3 years ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake location. ☆83 · Updated last year
- Scripts for generating synthetic fine-tuning data for reducing sycophancy. ☆117 · Updated 2 years ago
- [NeurIPS 2023 Main Track] Repository for the paper "Don’t Stop Pretraining? Make Prompt-based Fine-tuning Powerful Lea…" ☆76 · Updated last year
- A Fine-tuned LLaMA that is Good at Arithmetic Tasks. ☆178 · Updated 2 years ago
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models. ☆99 · Updated 2 years ago
- Scripts for fine-tuning Llama 2 via SFT and DPO. ☆205 · Updated 2 years ago
- A dataset for training/evaluating Question Answering Retrieval models on ChatGPT responses, with the possibility to train/evaluate on… ☆141 · Updated last year
- [NAACL 2024] Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models. ☆86 · Updated last year
- ☆98 · Updated 2 years ago
- Code and data for "Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation" (EMNLP 2023). ☆64 · Updated last year
- Instructions and demonstrations for building a GLM capable of formal logical reasoning. ☆55 · Updated last year