microsoft / stop
Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation
☆48 · Updated last year
Alternatives and similar repositories for stop
Users interested in stop are comparing it to the repositories listed below.
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆112 · Updated 3 months ago
- ☆129 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆110 · Updated 11 months ago
- Repository for the paper "Stream of Search: Learning to Search in Language" ☆151 · Updated 9 months ago
- ☆124 · Updated 9 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆60 · Updated last year
- Code for the NeurIPS 2024 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆233 · Updated 4 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 8 months ago
- Meta-CoT: Generalizable Chain-of-Thought Prompting in Mixed-task Scenarios with Large Language Models ☆99 · Updated 2 years ago
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Updated last year
- Learning to Retrieve by Trying: source code for "Grounding by Trying: LLMs with Reinforcement Learning-Enhanced Retrieval" ☆51 · Updated last year
- ☆55 · Updated last year
- A framework for pitting LLMs against each other in an evolving library of games ⚔ ☆34 · Updated 7 months ago
- Advanced Reasoning Benchmark Dataset for LLMs ☆47 · Updated 2 years ago
- Evaluating LLMs with CommonGen-Lite ☆91 · Updated last year
- A DSPy-based implementation of the Tree of Thoughts method (Yao et al., 2023) for generating persuasive arguments ☆92 · Updated last month
- ☆126 · Updated last year
- Work done by the Oxen.ai community, attempting to reproduce the Self-Rewarding Language Model paper from MetaAI ☆131 · Updated last year
- EMNLP 2024: "Re-reading improves reasoning in large language models". Simply repeating the question to get bidirectional understanding for… ☆27 · Updated 11 months ago
- Formal-LLM: Integrating Formal Language and Natural Language for Controllable LLM-based Agents ☆131 · Updated last year
- Augmented LLM with self-reflection ☆135 · Updated 2 years ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆116 · Updated last month
- ☆144 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where thoughts are generated before decoding. ☆173 · Updated 10 months ago
- Accompanying material for the sleep-time compute paper ☆117 · Updated 6 months ago
- Code for ExploreTom ☆87 · Updated 4 months ago
- Code and data for "Language Modeling with Editable External Knowledge" ☆36 · Updated last year
- LILO: Library Induction with Language Observations ☆88 · Updated last year
- Code for the ICLR 2024 paper "How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions" ☆71 · Updated last year