Walter0807 / RepBelief
[ICML 2024] Language Models Represent Beliefs of Self and Others
☆33 · Updated 9 months ago
Alternatives and similar repositories for RepBelief
Users interested in RepBelief are comparing it to the libraries listed below.
- Sotopia-π: Interactive Learning of Socially Intelligent Language Agents (ACL 2024) ☆69 · Updated last year
- ☆46 · Updated 8 months ago
- ☆19 · Updated last year
- Directional Preference Alignment ☆58 · Updated 9 months ago
- ☆30 · Updated last year
- ☆41 · Updated 8 months ago
- ☆131 · Updated last year
- Official Repository of LatentSeek ☆51 · Updated last month
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆44 · Updated 3 months ago
- ☆22 · Updated 2 months ago
- Code for most of the experiments in the paper Understanding the Effects of RLHF on LLM Generalisation and Diversity ☆44 · Updated last year
- The Good, The Bad, and The Greedy: Evaluation of LLMs Should Not Ignore Non-Determinism ☆30 · Updated last year
- [ACL 2024] Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models ☆21 · Updated last year
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆146 · Updated 8 months ago
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆108 · Updated last year
- [ACL 2024] The project of Symbol-LLM ☆56 · Updated last year
- Domain-specific preference (DSP) data and customized RM fine-tuning ☆25 · Updated last year
- Reinforced Multi-LLM Agents training ☆30 · Updated last month
- ☆32 · Updated 2 years ago
- Repository for the paper EscapeBench: Pushing Language Models to Think Outside the Box ☆14 · Updated 6 months ago
- Self-Supervised Alignment with Mutual Information ☆20 · Updated last year
- ☆26 · Updated last year
- Official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- An attempt to create Self-Correcting-LLM based on the paper Training Language Models to Self-Correct via Reinforcement Learning by g… ☆35 · Updated last week
- ☆48 · Updated last month
- ☆27 · Updated last year
- Implementation of the paper "Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners" ☆20 · Updated last year
- RENT (Reinforcement Learning via Entropy Minimization) is an unsupervised method for training reasoning LLMs ☆31 · Updated last week
- Official code for the paper WALL-E: World Alignment by NeuroSymbolic Learning improves World Model-based LLM Agents ☆39 · Updated 2 months ago
- Code for LaMPP: Language Models as Probabilistic Priors for Perception and Action ☆37 · Updated 2 years ago