feradauto / MoralCoT
Repo for: When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment
☆38 · Updated 2 years ago
Alternatives and similar repositories for MoralCoT
Users interested in MoralCoT are comparing it to the repositories listed below.
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- Few-shot Learning with Auxiliary Data ☆31 · Updated 2 years ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆45 · Updated 2 months ago
- This repository includes code for the paper "Does Localization Inform Editing? Surprising Differences in Where Knowledge Is Stored vs. Ca…" ☆60 · Updated 2 years ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆61 · Updated last year
- Inspecting and Editing Knowledge Representations in Language Models ☆119 · Updated 2 years ago
- Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS). https://arxiv.org/abs/2305.15076 ☆25 · Updated last year
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆99 · Updated 4 years ago
- Code for the ICLR paper: https://openreview.net/forum?id=-cqvvvb-NkI ☆95 · Updated 2 years ago
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated 2 years ago
- ☆56 · Updated 2 years ago
- The GitHub repo for Goal Driven Discovery of Distributional Differences via Language Descriptions ☆71 · Updated 2 years ago
- ☆29 · Updated last month
- ☆29 · Updated last year
- ☆52 · Updated 8 months ago
- ☆36 · Updated 3 years ago
- ☆95 · Updated last year
- Resolving Knowledge Conflicts in Large Language Models, COLM 2024 ☆18 · Updated 2 months ago
- ☆16 · Updated last year
- PASTA: Post-hoc Attention Steering for LLMs ☆131 · Updated last year
- Skill-It! A Data-Driven Skills Framework for Understanding and Training Language Models ☆47 · Updated 2 years ago
- The official project for our paper "Is Bigger and Deeper Always Better? Probing LLaMA Across Scales and Layers" ☆31 · Updated last year
- CausalGym: Benchmarking causal interpretability methods on linguistic tasks ☆50 · Updated last year
- Self-Supervised Alignment with Mutual Information ☆20 · Updated last year
- [ICLR 2023] Code for our paper "Selective Annotation Makes Language Models Better Few-Shot Learners" ☆111 · Updated 2 years ago
- SILO Language Models code repository ☆83 · Updated last year
- Finding semantically meaningful and accurate prompts. ☆48 · Updated 2 years ago
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆113 · Updated 2 years ago
- Sparse and discrete interpretability tool for neural networks ☆64 · Updated last year
- DialOp: Decision-oriented dialogue environments for collaborative language agents ☆111 · Updated last year