ApolloResearch / insider-trading
☆50 · Updated last year
Alternatives and similar repositories for insider-trading
Users interested in insider-trading are comparing it to the repositories listed below.
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆70 · Updated 11 months ago
- Learning to route instances for Human vs AI Feedback (ACL 2025 Main) ☆23 · Updated 3 weeks ago
- Functional Benchmarks and the Reasoning Gap ☆86 · Updated 8 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆57 · Updated 9 months ago
- ☆54 · Updated 8 months ago
- ☆76 · Updated last month
- ☆49 · Updated 7 months ago
- Evaluating LLMs with CommonGen-Lite ☆90 · Updated last year
- Code for reproducing our paper "Not All Language Model Features Are Linear" ☆75 · Updated 6 months ago
- Code for "Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs" ☆50 · Updated 3 months ago
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆26 · Updated last year
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆49 · Updated 10 months ago
- ☆58 · Updated 3 weeks ago
- Code and data repo for the CoNLL paper "Future Lens: Anticipating Subsequent Tokens from a Single Hidden State" ☆18 · Updated last year
- QAlign is a test-time alignment approach that improves language model performance using Markov chain Monte Carlo methods. ☆24 · Updated last month
- ☆69 · Updated last year
- ☆114 · Updated 3 months ago
- Evaluating LLMs with fewer examples ☆155 · Updated last year
- ☆131 · Updated 2 months ago
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆107 · Updated last year
- ☆36 · Updated 7 months ago
- Codebase accompanying the Summary of a Haystack paper ☆78 · Updated 8 months ago
- An attribution library for LLMs ☆41 · Updated 8 months ago
- ☆27 · Updated last month
- Evaluation of neuro-symbolic engines ☆35 · Updated 10 months ago
- Arrakis is a library to conduct, track, and visualize mechanistic interpretability experiments. ☆29 · Updated last month
- A collection of different ways to implement accessing and modifying internal model activations for LLMs ☆16 · Updated 7 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes ☆81 · Updated last year
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆76 · Updated last year
- ☆21 · Updated 8 months ago