HRajoliN / LLM-Alignment-Project

A comprehensive template for aligning large language models (LLMs) using Reinforcement Learning from Human Feedback (RLHF), transfer learning, and more. Build your own customizable LLM alignment solution with ease.

Alternatives and similar repositories for LLM-Alignment-Project:

Users interested in LLM-Alignment-Project are comparing it to the libraries listed below.