zorazrw / workflow-induction-toolkit
A toolkit to induce interpretable workflows from raw computer-use activities.
☆35 · Updated 2 months ago
Alternatives and similar repositories for workflow-induction-toolkit
Users interested in workflow-induction-toolkit are comparing it to the libraries listed below.
- ☆24 · Updated 10 months ago
- ReBase: Training Task Experts through Retrieval Based Distillation ☆29 · Updated 11 months ago
- [EMNLP 2024] A Retrieval Benchmark for Scientific Literature Search ☆102 · Updated last year
- Tree prompting: easy-to-use scikit-learn interface for improved prompting. ☆41 · Updated 2 years ago
- Aligning with Human Judgement: The Role of Pairwise Preference in Large Language Model Evaluators (Liu et al.; COLM 2024) ☆48 · Updated last year
- ☆129 · Updated last year
- [ACL 2024] <Large Language Models for Automated Open-domain Scientific Hypotheses Discovery>. It has also received the best poster award … ☆42 · Updated last year
- Official Code Release for "Training a Generally Curious Agent" ☆44 · Updated 8 months ago
- Learning to route instances for Human vs AI Feedback (ACL Main '25) ☆26 · Updated 5 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆61 · Updated last year
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆114 · Updated 5 months ago
- Aioli: A unified optimization framework for language model data mixing ☆32 · Updated last year
- Middleware for LLMs: Tools Are Instrumental for Language Agents in Complex Environments (EMNLP'2024) ☆37 · Updated last year
- ☆19 · Updated 5 months ago
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Updated last year
- ☆28 · Updated 2 months ago
- Codebase accompanying the Summary of a Haystack paper. ☆80 · Updated last year
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆101 · Updated 2 years ago
- ☆92 · Updated last month
- Public code repo for paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆112 · Updated last year
- ☆52 · Updated 7 months ago
- ☆29 · Updated 10 months ago
- ☆28 · Updated last year
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆71 · Updated last year
- Code/data for MARG (multi-agent review generation) ☆59 · Updated 3 months ago
- The official repo for DARG: Dynamic Evaluation of Large Language Models via Adaptive Reasoning Graph ☆18 · Updated last year
- This repository includes a benchmark and code for the paper "Evaluating LLMs at Detecting Errors in LLM Responses". ☆30 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆63 · Updated last year
- [ACL'24] Code and data of paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- ☆25 · Updated 7 months ago