Data-Provenance-Initiative / Data-Provenance-Collection
☆234 · Updated 2 months ago
Alternatives and similar repositories for Data-Provenance-Collection
Users interested in Data-Provenance-Collection are comparing it to the repositories listed below.
- ☆266 · Updated 4 months ago
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆128 · Updated last year
- Let's build better datasets, together! ☆259 · Updated 5 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆258 · Updated 10 months ago
- Awesome synthetic (text) datasets ☆281 · Updated 7 months ago
- Evaluating LLMs with fewer examples ☆155 · Updated last year
- Code accompanying "How I learned to start worrying about prompt formatting" ☆105 · Updated 8 months ago
- What's In My Big Data (WIMBD): a toolkit for analyzing large text datasets ☆221 · Updated 6 months ago
- PAIR.withgoogle.com and friends' work on interpretability methods ☆191 · Updated last month
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ☆76 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆86 · Updated 8 months ago
- Website for hosting the Open Foundation Models Cheat Sheet ☆267 · Updated 3 weeks ago
- ☆95 · Updated last year
- A toolkit for describing model features and intervening on those features to steer behavior ☆185 · Updated 6 months ago
- ☆96 · Updated 3 months ago
- The GitHub repo for Goal-Driven Discovery of Distributional Differences via Language Descriptions ☆70 · Updated 2 years ago
- ☆58 · Updated 2 weeks ago
- The official repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆108 · Updated last year
- ☆131 · Updated 2 months ago
- ☆106 · Updated last year
- Visualizing attention for LLM users ☆212 · Updated 5 months ago
- The official evaluation suite and dynamic data release for MixEval ☆242 · Updated 6 months ago
- Evaluating LLMs with CommonGen-Lite ☆90 · Updated last year
- Codebase accompanying the Summary of a Haystack paper ☆78 · Updated 8 months ago
- Run safety benchmarks against AI models and view detailed reports showing how well they performed ☆92 · Updated this week
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research) ☆200 · Updated 5 months ago
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vectors ☆239 · Updated 3 months ago
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples" ☆302 · Updated last year
- The Foundation Model Transparency Index ☆79 · Updated last year
- ☆69 · Updated last year