liyongqi2002 / Awesome-Personalized-Alignment
A curated list of personalized alignment resources (continually updated).
☆30 · Updated last week
Alternatives and similar repositories for Awesome-Personalized-Alignment
Users interested in Awesome-Personalized-Alignment are comparing it to the libraries listed below.
- ☆53 · Updated this week
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆127 · Updated 9 months ago
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆134 · Updated this week
- 📜 Paper list on decoding methods for LLMs and LVLMs ☆52 · Updated 2 weeks ago
- This is the official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language … ☆79 · Updated 2 months ago
- This is my attempt to create Self-Correcting-LLM based on the paper "Training Language Models to Self-Correct via Reinforcement Learning" by g… ☆35 · Updated last week
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆124 · Updated 3 months ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆114 · Updated 10 months ago
- ☆141 · Updated 10 months ago
- Awesome-Long2short-on-LRMs is a collection of state-of-the-art, novel, exciting long2short methods on large reasoning models. It contains… ☆233 · Updated last month
- [2025-TMLR] A Survey on the Honesty of Large Language Models ☆58 · Updated 7 months ago
- This is a unified platform for implementing and evaluating test-time reasoning mechanisms in Large Language Models (LLMs). ☆20 · Updated 6 months ago
- A collection of survey papers and resources related to Large Language Models (LLMs). ☆40 · Updated last year
- ☆85 · Updated last month
- Benchmarking LLMs' Gaming Ability in Multi-Agent Environments ☆83 · Updated 2 months ago
- ☆46 · Updated 7 months ago
- Must-read Papers on Large Language Model (LLM) Continual Learning ☆144 · Updated last year
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆164 · Updated last year
- ☆51 · Updated last month
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision" ☆55 · Updated 7 months ago
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.…) ☆25 · Updated 9 months ago
- Code for papers on Large Language Models Personalization (LaMP) ☆166 · Updated 4 months ago
- ☆28 · Updated last year
- A comprehensive collection of process reward models. ☆95 · Updated 3 weeks ago
- A method of ensemble learning for heterogeneous large language models. ☆58 · Updated 11 months ago
- Public code repo for the COLING 2025 paper "Aligning LLMs with Individual Preferences via Interaction" ☆32 · Updated 3 months ago
- [NeurIPS 2024] Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models ☆98 · Updated 11 months ago
- The repo for In-context Autoencoder ☆129 · Updated last year
- ☆74 · Updated last year
- ☆45 · Updated last year