GaryYufei / AlignLLMHumanSurvey
Aligning Large Language Models with Human: A Survey
☆733 · Updated 2 years ago
Alternatives and similar repositories for AlignLLMHumanSurvey
Users interested in AlignLLMHumanSurvey are comparing it to the repositories listed below.
- Reading list of hallucination in LLMs. Check out our new survey paper: "Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models". ☆1,046 · Updated last week
- Papers and Datasets on Instruction Tuning and Following. ✨✨✨ ☆500 · Updated last year
- [ACL 2023] Reasoning with Language Model Prompting: A Survey ☆983 · Updated 4 months ago
- Reading list on instruction tuning, a trend that started with Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022). ☆770 · Updated 2 years ago
- This is a collection of research papers for Self-Correcting Large Language Models with Automated Feedback. ☆549 · Updated 11 months ago
- The papers are organized according to our survey: Evaluating Large Language Models: A Comprehensive Survey. ☆780 · Updated last year
- The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models". ☆1,564 · Updated 4 months ago
- A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and eval. ☆379 · Updated 2 years ago
- Paper List for In-context Learning 🌷 ☆865 · Updated 11 months ago
- This repository contains a collection of papers and resources on Reasoning in Large Language Models. ☆564 · Updated last year
- Must-read Papers on Knowledge Editing for Large Language Models. ☆1,167 · Updated 2 months ago
- This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆514 · Updated last year
- OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning. ☆574 · Updated 2 years ago
- Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,202 · Updated last year
- Papers related to LLM agents published at top conferences. ☆319 · Updated 5 months ago
- Prod Env ☆430 · Updated last year
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆515 · Updated last year
- ☆924 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs); a minimal DPO loss sketch appears after this list. ☆888 · Updated this week
- ☆909 · Updated last year
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆570 · Updated 9 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" (a simplified decoding sketch appears after this list). ☆517 · Updated 8 months ago
- An Awesome Collection for LLM Survey ☆378 · Updated 4 months ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning ☆497 · Updated 11 months ago
- The repository for the survey paper "Survey on Large Language Models Factuality: Knowledge, Retrieval and Domain-Specificity" ☆340 · Updated last year
- LLM hallucination paper list ☆323 · Updated last year
- A curated list of awesome instruction tuning datasets, models, papers and repositories. ☆339 · Updated 2 years ago
- RewardBench: the first evaluation tool for reward models. ☆640 · Updated 3 months ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆547 · Updated last year
- LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively (a minimal loop sketch appears below). ☆742 · Updated last year
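
The HALOs entry above names several preference-optimization losses. As a point of reference, here is a minimal sketch of the DPO objective in plain PyTorch; this is not the HALOs library's API, and the tensor names are illustrative assumptions.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss over a batch of preference pairs.

    Each argument is a 1-D tensor of summed per-token log-probabilities of
    the chosen/rejected responses under the trained policy or the frozen
    reference model; `beta` scales the implicit KL penalty.
    """
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    # Maximize the log-sigmoid of the reward gap between chosen and rejected.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()
```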
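
The DoLa entry describes decoding by contrasting a mature (final) layer with a premature (earlier) layer. Below is a simplified sketch of just the contrast step, assuming the paper's dynamic early-layer selection by Jensen-Shannon divergence has already been performed; the function name is hypothetical.

```python
import torch
import torch.nn.functional as F

def dola_contrast_logits(final_logits, early_logits, relative_top=0.1):
    """Contrast final-layer and early-layer next-token distributions.

    `final_logits` and `early_logits` are (batch, vocab) tensors from two
    language-model heads; tokens that are implausible under the final
    layer are masked out before the contrast is used for decoding.
    """
    final_logp = F.log_softmax(final_logits, dim=-1)
    early_logp = F.log_softmax(early_logits, dim=-1)
    # Adaptive plausibility constraint: keep only tokens whose final-layer
    # probability is within `relative_top` of the per-row maximum.
    probs = final_logp.exp()
    keep = probs >= relative_top * probs.max(dim=-1, keepdim=True).values
    contrast = final_logp - early_logp
    return contrast.masked_fill(~keep, float("-inf"))
```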
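
The last entry describes the iterative generate-critique-revise loop (as in Self-Refine). A minimal sketch, where `generate`, `critique`, and `is_acceptable` are hypothetical callables wrapping an LLM:

```python
def self_refine(generate, critique, is_acceptable, prompt, max_iters=3):
    """Iteratively improve an LLM output using its own feedback.

    `generate(prompt, feedback)` returns a draft (initially with no
    feedback), `critique(prompt, draft)` returns natural-language feedback,
    and `is_acceptable(feedback)` decides when to stop refining.
    """
    draft = generate(prompt, feedback=None)
    for _ in range(max_iters):
        feedback = critique(prompt, draft)
        if is_acceptable(feedback):
            break
        # Feed the critique back in and regenerate an improved draft.
        draft = generate(prompt, feedback=feedback)
    return draft
```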