PAIR-code / deliberate-lab
Platform for running online research experiments on human + LLM group dynamics.
⭐ 54 · Updated this week
Alternatives and similar repositories for deliberate-lab
Users interested in deliberate-lab are comparing it to the repositories listed below.
- ⭐ 75 · Updated 8 months ago
- In situ interactive widgets for responsible AI 🌱 ⭐ 27 · Updated last year
- Examples and guides for using Nomic Atlas ⭐ 37 · Updated 8 months ago
- This repo contains detailed implementation information about Anthropic's paired-prompts approach for evaluating political neutrality. ⭐ 108 · Updated last month
- Literature Review Made Easy with Visualization ⭐ 65 · Updated 2 years ago
- ⭐ 258 · Updated 8 months ago
- Dataset and annotations for ASSETS 2022 publication ⭐ 12 · Updated 3 years ago
- Classifiers for "Investigating Affective Use and Emotional Well-being in ChatGPT" ⭐ 46 · Updated 5 months ago
- Code for "Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs" ⭐ 85 · Updated 9 months ago
- ⭐ 83 · Updated 11 months ago
- ⭐ 175 · Updated 4 months ago
- Accompanying codebase for neuroscope.io, a website for displaying max-activating dataset examples for language model neurons ⭐ 13 · Updated 2 years ago
- A customizable GPT in a single page, using the OpenAI models text-embedding-ada-002, tts-1, whisper-1, dall-e-3, and gpt-4-vision-preview ⭐ 14 · Updated last year
- Plurals: A System for Guiding LLMs Via Simulated Social Ensembles ⭐ 29 · Updated last week
- A framework for standardizing evaluations of large foundation models, beyond single-score reporting and rankings ⭐ 173 · Updated 3 weeks ago
- Repo for the paper on Escalation Risks of AI systems ⭐ 44 · Updated last year
- LLM Comparator is an interactive data visualization tool for evaluating and analyzing LLM responses side-by-side, developed by the PAIR t… ⭐ 501 · Updated 10 months ago
- ⭐ 316 · Updated last year
- ⭐ 187 · Updated 5 months ago
- ⭐ 57 · Updated last week
- Causal DAG Extraction from Text (DEFT) ⭐ 66 · Updated 11 months ago
- Model Alignment is a Python library from the PAIR team that enables users to create model prompts through user feedback instead of manual … ⭐ 29 · Updated last month
- ⭐ 83 · Updated last year
- A system that tries to resolve all issues on a GitHub repo with OpenHands ⭐ 117 · Updated last year
- The Foundation Model Transparency Index ⭐ 83 · Updated 2 weeks ago
- Prompts used in the Automated Auditing blog post ⭐ 127 · Updated 5 months ago
- The landscape of biomedical research ⭐ 121 · Updated 6 months ago
- ⭐ 29 · Updated last year
- Lightweight demo using the Anthropic Python SDK to experiment with Claude's Search and Retrieval capabilities over a variety of knowledge… ⭐ 177 · Updated last year
- ⭐ 49 · Updated 2 months ago