josh-ashkinaze / plurals
Plurals: A System for Guiding LLMs Via Simulated Social Ensembles
☆28 · Updated 3 months ago
Alternatives and similar repositories for plurals
Users who are interested in plurals are comparing it to the libraries listed below.
- ☆96 · Updated last year
- The Prism Alignment Project ☆79 · Updated last year
- Data exports from select "open data" Polis conversations ☆42 · Updated 11 months ago
- Code for "Using Large Language Models to Simulate Multiple Humans and Replicate Human Subject Studies" ☆36 · Updated last year
- Code for "Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs" ☆55 · Updated 7 months ago
- Concept Induction: Analyzing Unstructured Text with High-Level Concepts Using LLooM (CHI 2024 paper). LLooM automatically surfaces high-l… ☆126 · Updated 3 months ago
- ☆300 · Updated last year
- Edu-ConvoKit: An Open-Source Framework for Education Conversation Data ☆100 · Updated 5 months ago
- Repo for the paper "Detecting Logical Fallacies: From Quiz to Climate Change News" (2021) ☆79 · Updated last year
- ☆108 · Updated 7 months ago
- A dynamic forecasting benchmark for LLMs ☆30 · Updated 3 weeks ago
- ☆249 · Updated 6 months ago
- ☆115 · Updated last year
- Open source version of Anthropic's Clio: A system for privacy-preserving insights into real-world AI use ☆44 · Updated last month
- ☆74 · Updated last year
- ☆54 · Updated 3 months ago
- We develop benchmarks and analysis tools to evaluate the causal reasoning abilities of LLMs. ☆126 · Updated last year
- ☆39 · Updated 11 months ago
- Aligning AI With Shared Human Values (ICLR 2021) ☆298 · Updated 2 years ago
- Evaluating the Moral Beliefs Encoded in LLMs ☆28 · Updated 9 months ago
- datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆92 · Updated last year
- A repository for the paper "Beliefs about AI influence human-AI interaction and can be manipulated to increase perceived trustworthiness,… ☆17 · Updated last year
- Governance of the Commons Simulation (GovSim) ☆59 · Updated 8 months ago
- A mechanistic approach for understanding and detecting factual errors of large language models. ☆47 · Updated last year
- Get answers to research questions from 200M+ papers. Link to demo - ☆206 · Updated last year
- Accompanying code and SEP dataset for the "Can LLMs Separate Instructions From Data? And What Do We Even Mean By That?" paper. ☆55 · Updated 6 months ago
- In situ interactive widgets for responsible AI 🌱 ☆27 · Updated last year
- Repo for: When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment ☆38 · Updated 2 years ago
- ☆23 · Updated last year
- ☆29 · Updated 2 years ago