GanjinZero / RRHF
[NIPS2023] RRHF & Wombat
☆811 Updated last year
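For context when comparing the projects below: RRHF (Rank Responses to Align Language Models with Human Feedback) trains a policy so that its length-normalized log-probabilities over several candidate responses agree with reward-model scores, alongside a cross-entropy term on the best candidate. The sketch below illustrates that objective; the function name `rrhf_loss` and the toy numbers are illustrative assumptions, not code taken from this repository.

```python
# Minimal sketch of an RRHF-style objective (assumed from the paper's description,
# not copied from the GanjinZero/RRHF codebase).
import torch

def rrhf_loss(seq_logprobs: torch.Tensor,   # (k,) summed token log-probs per candidate
              seq_lengths: torch.Tensor,    # (k,) number of response tokens per candidate
              rewards: torch.Tensor) -> torch.Tensor:  # (k,) reward-model scores
    # Length-normalized log-probability p_i of each candidate under the policy.
    p = seq_logprobs / seq_lengths

    # Ranking loss: for every pair whose policy ordering disagrees with the rewards,
    # penalize max(0, p_i - p_j) where reward_i < reward_j.
    diff = p.unsqueeze(1) - p.unsqueeze(0)               # diff[i, j] = p_i - p_j
    worse = rewards.unsqueeze(1) < rewards.unsqueeze(0)  # True where reward_i < reward_j
    rank_loss = torch.relu(diff)[worse].sum()

    # SFT-style term: maximize the log-likelihood of the highest-reward candidate.
    best = rewards.argmax()
    sft_loss = -seq_logprobs[best]

    return rank_loss + sft_loss

# Toy usage with made-up numbers (three candidates for one prompt):
logp = torch.tensor([-12.0, -9.5, -15.0])
lens = torch.tensor([10.0, 8.0, 12.0])
rw = torch.tensor([0.2, 0.9, -0.3])
print(rrhf_loss(logp, lens, rw))
```

In practice the log-probabilities come from a forward pass of the policy over each candidate response; they are precomputed here so the snippet runs standalone.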
Alternatives and similar repositories for RRHF
Users interested in RRHF are comparing it to the libraries listed below
- ☆460 Updated last year
- ☆760 Updated last year
- ☆922 Updated last year
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆822 Updated last year
- LOMO: LOw-Memory Optimization ☆989 Updated last year
- Open Academic Research on Improving LLaMA to SOTA LLM ☆1,620 Updated last year
- Naive Bayes-based Context Extension ☆325 Updated 8 months ago
- Collaborative Training of Large Language Models in an Efficient Way ☆418 Updated 11 months ago
- train llama on a single A100 80G node using 🤗 transformers and 🚀 Deepspeed Pipeline Parallelism ☆224 Updated last year
- ☆280 Updated last year
- Reading list of instruction tuning. A trend starting from Natural-Instructions (ACL 2022), FLAN (ICLR 2022) and T0 (ICLR 2022). ☆769 Updated 2 years ago
- ☆908 Updated last year
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning. ☆569 Updated last year
- Multi-agent Social Simulation + Efficient, Effective, and Stable alternative of RLHF. Code for the paper "Training Socially Aligned Langu… ☆352 Updated 2 years ago
- Crosslingual Generalization through Multitask Finetuning ☆537 Updated 10 months ago
- Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,185 Updated last year
- A full pipeline to finetune Vicuna LLM with LoRA and RLHF on consumer hardware. Implementation of RLHF (Reinforcement Learning with Human… ☆219 Updated last year
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆547 Updated last year
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning ☆397 Updated last year
- Finetuning LLaMA with RLHF (Reinforcement Learning with Human Feedback) based on DeepSpeed Chat ☆114 Updated 2 years ago
- PyTorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets ☆340 Updated last year
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,388 Updated last year
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆506 Updated last year
- Large Language Models Are Reasoning Teachers (ACL 2023) ☆341 Updated 5 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆561 Updated 8 months ago
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆604 Updated 2 months ago
- A plug-and-play library for parameter-efficient tuning (Delta Tuning) ☆1,032 Updated 10 months ago
- A large-scale, fine-grained, diverse preference dataset (and models). ☆346 Updated last year
- Papers and Datasets on Instruction Tuning and Following. ✨✨✨ ☆498 Updated last year
- GPT-Fathom is an open-source and reproducible LLM evaluation suite, benchmarking 10+ leading open-source and closed-source LLMs as well a… ☆350 Updated last year