Persdre / awesome-llm-human-simulation
[ICLR 2025 BlogPost] LLM Simulating Humanity Papers https://arxiv.org/abs/2501.08579
☆43 · Updated 3 months ago
Alternatives and similar repositories for awesome-llm-human-simulation
Users interested in awesome-llm-human-simulation are comparing it to the libraries listed below.
- ☆33 · Updated 10 months ago
- A collection of resources that investigate social agents. ☆175 · Updated 4 months ago
- ☆172 · Updated last month
- Implementation of the MATRIX framework (ICML 2024) ☆58 · Updated last year
- code repo for ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆130 · Updated last year
- [ICLR 2025] Benchmarking Agentic Workflow Generation ☆119 · Updated 6 months ago
- Codebase for reproducing the experiments of the semantic uncertainty paper (short-phrase and sentence-length experiments). ☆355 · Updated last year
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs". ☆127 · Updated 5 months ago
- This is the official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language …" ☆99 · Updated 3 months ago
- [NeurIPS 2024] Knowledge Circuits in Pretrained Transformers ☆153 · Updated 6 months ago
- A collection of works that investigate social agents, simulations and their real-world impact in text, embodied, and robotics contexts. ☆95 · Updated last year
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.…) ☆26 · Updated 11 months ago
- Using Explanations as a Tool for Advanced LLMs ☆67 · Updated 11 months ago
- ☆24 · Updated last year
- [ACL 2024] SALAD benchmark & MD-Judge ☆158 · Updated 5 months ago
- ☆238 · Updated last year
- This repository contains the code and data for the paper "SelfIE: Self-Interpretation of Large Language Model Embeddings" by Haozhe Chen, … ☆50 · Updated 8 months ago
- This repo contains code for the paper "Uncertainty Estimation and Quantification for LLMs: A Simple Supervised Approach". ☆21 · Updated 10 months ago
- The Prism Alignment Project ☆79 · Updated last year
- The implementation of the paper "ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability". ☆33 · Updated 2 months ago
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆154 · Updated last year
- ☆132 · Updated 5 months ago
- Governance of the Commons Simulation (GovSim) ☆57 · Updated 7 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆114 · Updated last year
- A curated list of resources for activation engineering ☆100 · Updated 3 months ago
- ☆151 · Updated last year
- A curated list of LLM Interpretability related material - Tutorial, Library, Survey, Paper, Blog, etc. ☆265 · Updated 5 months ago
- ☆100 · Updated 10 months ago
- ☆26 · Updated 4 months ago
- A benchmark list for the evaluation of large language models. ☆137 · Updated last week