OpenLMLab / MOSS-RLHF
Secrets of RLHF in Large Language Models Part I: PPO
☆1,399 · Updated last year
Alternatives and similar repositories for MOSS-RLHF
Users interested in MOSS-RLHF are comparing it to the libraries listed below.
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,544 · Updated last month
- Recipes to train a reward model for RLHF. ☆1,470 · Updated 6 months ago
- [NIPS2023] RRHF & Wombat ☆811 · Updated 2 years ago
- ☆908 · Updated last year
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆2,058 · Updated 2 years ago
- Open Academic Research on Improving LLaMA to SOTA LLM ☆1,615 · Updated 2 years ago
- A recipe for online RLHF and online iterative DPO. ☆533 · Updated 9 months ago
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆923 · Updated 8 months ago
- Aligning Large Language Models with Human: A Survey ☆737 · Updated 2 years ago
- Reference implementation for DPO (Direct Preference Optimization) ☆2,760 · Updated last year
- ☆922 · Updated last year
- ☆767 · Updated last year
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆826 · Updated last year
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆994 · Updated 10 months ago
- ☆548 · Updated 9 months ago
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,686 · Updated last year
- Code for the paper Fine-Tuning Language Models from Human Preferences ☆1,370 · Updated 2 years ago
- Code for our EMNLP 2023 paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,201 · Updated last year
- A plug-and-play library for parameter-efficient tuning (Delta Tuning) ☆1,037 · Updated last year
- The Panda project, launched in May 2023, is an open-source overseas Chinese large language model effort that explores the full technology stack in the era of large models and aims to advance innovation and collaboration in Chinese natural language processing. ☆1,038 · Updated 2 years ago
- O1 Replication Journey ☆2,002 · Updated 9 months ago
- OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models ☆1,823 · Updated 9 months ago
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,885 · Updated 2 months ago
- The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models". ☆1,574 · Updated 4 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆890 · Updated 3 weeks ago
- AgentTuning: Enabling Generalized Agent Abilities for LLMs ☆1,465 · Updated last year
- Paper List for In-context Learning 🌷 ☆868 · Updated last year
- Minimal-cost training of a 0.5B R1-Zero model ☆778 · Updated 5 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024] ☆572 · Updated 10 months ago
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback"