ethz-spylab / superhuman-ai-consistency
☆30 · Updated 2 years ago
Alternatives and similar repositories for superhuman-ai-consistency
Users who are interested in superhuman-ai-consistency are comparing it to the libraries listed below.
- ☆16 · Updated last year
- ModelDiff: A Framework for Comparing Learning Algorithms · ☆58 · Updated 2 years ago
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws (NeurIPS 2025) · ☆30 · Updated 2 months ago
- ☆19 · Updated last year
- The repository contains code for Adaptive Data Optimization · ☆29 · Updated last year
- ☆27 · Updated 2 years ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" · ☆71 · Updated last year
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] · ☆31 · Updated 10 months ago
- ☆20 · Updated 5 months ago
- Codebase for Inference-Time Policy Adapters · ☆24 · Updated 2 years ago
- ☆45 · Updated 2 years ago
- ☆33 · Updated 11 months ago
- Self-Supervised Alignment with Mutual Information · ☆20 · Updated last year
- Google Research · ☆46 · Updated 3 years ago
- Code and Data Repo for the CoNLL Paper -- Future Lens: Anticipating Subsequent Tokens from a Single Hidden State · ☆20 · Updated last month
- Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023] · ☆43 · Updated last year
- ☆31 · Updated 2 years ago
- ☆20 · Updated last month
- ☆59 · Updated 2 years ago
- Official PyTorch implementation of "Neural Relation Graph: A Unified Framework for Identifying Label Noise and Outlier Data" (NeurIPS'23) · ☆15 · Updated 2 years ago
- ☆17 · Updated last year
- ☆44 · Updated 2 years ago
- ☆23 · Updated 10 months ago
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" · ☆17 · Updated 8 months ago
- Public code release for the paper "Reawakening knowledge: Anticipatory recovery from catastrophic interference via structured training" · ☆10 · Updated last month
- Sparse and discrete interpretability tool for neural networks · ☆64 · Updated last year
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" · ☆48 · Updated last year
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… · ☆28 · Updated last year
- [ACL 2023]: Training Trajectories of Language Models Across Scales https://arxiv.org/pdf/2212.09803.pdf · ☆25 · Updated 2 years ago
- Data for "Datamodels: Predicting Predictions with Training Data" · ☆97 · Updated 2 years ago