ruyimarone / data-portraits
Documenting large text datasets
☆14 · Updated last year
Alternatives and similar repositories for data-portraits
Users interested in data-portraits are comparing it to the libraries listed below.
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- CausalGym: Benchmarking causal interpretability methods on linguistic tasks ☆51 · Updated last year
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ☆99 · Updated 4 years ago
- ☆60 · Updated 2 years ago
- Repo for: When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment ☆38 · Updated 2 years ago
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ☆39 · Updated 3 years ago
- Evaluate interpretability methods on localizing and disentangling concepts in LLMs. ☆57 · Updated 3 months ago
- [EMNLP'23] Execution-Based Evaluation for Open Domain Code Generation ☆49 · Updated 2 years ago
- This is the official implementation for our ACL 2024 paper: "Causal Estimation of Memorisation Profiles". ☆24 · Updated 10 months ago
- This repository includes code for the paper "Does Localization Inform Editing? Surprising Differences in Where Knowledge Is Stored vs. Ca… ☆60 · Updated 2 years ago
- Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS). https://arxiv.org/abs/2305.15076. ☆25 · Updated 2 years ago
- ☆58 · Updated last year
- ☆19 · Updated 2 years ago
- ☆36 · Updated 3 years ago
- Align your LM to express calibrated verbal statements of confidence in its long-form generations. ☆29 · Updated last year
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆32 · Updated last year
- Landing page for MIB: A Mechanistic Interpretability Benchmark ☆24 · Updated 5 months ago
- ☆31 · Updated 2 years ago
- Official Repository for Dataset Inference for LLMs ☆43 · Updated last year
- Easy-to-use MIRAGE code for faithful answer attribution in RAG applications. Paper: https://aclanthology.org/2024.emnlp-main.347/ ☆26 · Updated 10 months ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆126 · Updated 11 months ago
- This repository contains data, code, and models for contextual noncompliance. ☆24 · Updated last year
- The LM Contamination Index is a manually created database of contamination evidence for LMs. ☆82 · Updated last year
- Resolving Knowledge Conflicts in Large Language Models, COLM 2024 ☆18 · Updated 3 months ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆80 · Updated last year
- [NeurIPS 2023 D&B Track] Code and data for paper "Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evalua… ☆36 · Updated 2 years ago
- Code for our paper: "GrIPS: Gradient-free, Edit-based Instruction Search for Prompting Large Language Models" ☆57 · Updated 2 years ago
- ☆114 · Updated 3 years ago
- Few-shot Learning with Auxiliary Data ☆31 · Updated 2 years ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆28 · Updated last year