ethz-spylab / superhuman-ai-consistency
☆30 · Updated 2 years ago
Alternatives and similar repositories for superhuman-ai-consistency
Users interested in superhuman-ai-consistency are comparing it to the repositories listed below.
- ☆27 · Updated 2 years ago
- Gemstones: A Model Suite for Multi-Faceted Scaling Laws (NeurIPS 2025) ☆29 · Updated last week
- Code for Adaptive Data Optimization ☆25 · Updated 10 months ago
- ☆15 · Updated last year
- ☆18 · Updated 10 months ago
- Is In-Context Learning Sufficient for Instruction Following in LLMs? [ICLR 2025] ☆31 · Updated 8 months ago
- ☆19 · Updated 3 months ago
- ☆33 · Updated 9 months ago
- ModelDiff: A Framework for Comparing Learning Algorithms ☆59 · Updated 2 years ago
- Self-Supervised Alignment with Mutual Information ☆21 · Updated last year
- ☆78 · Updated 6 months ago
- Data for "Datamodels: Predicting Predictions with Training Data" ☆97 · Updated 2 years ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆48 · Updated last year
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆28 · Updated last year
- ☆20 · Updated last year
- Codebase for Inference-Time Policy Adapters ☆24 · Updated last year
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated 2 years ago
- Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023] ☆43 · Updated last year
- Codebase for Context-aware Meta-learned Loss Scaling (CaMeLS). https://arxiv.org/abs/2305.15076 ☆25 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆92 · Updated 10 months ago
- ☆11 · Updated last year
- Public code release for the paper "Reawakening knowledge: Anticipatory recovery from catastrophic interference via structured training" ☆10 · Updated 9 months ago
- ☆42 · Updated 2 years ago
- CodeUltraFeedback: aligning large language models to coding preferences (TOSEM 2025) ☆71 · Updated last year
- ☆50 · Updated last year
- Sparse and discrete interpretability tool for neural networks ☆63 · Updated last year
- Universal Neurons in GPT2 Language Models ☆30 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆81 · Updated 10 months ago
- Code for "The Expressive Power of Low-Rank Adaptation" ☆20 · Updated last year