microsoft / stop
Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation
☆49 · Updated 2 years ago
Alternatives and similar repositories for stop
Users interested in stop are comparing it to the libraries listed below.
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆114 · Updated 6 months ago
- Repository for the paper Stream of Search: Learning to Search in Language ☆152 · Updated 11 months ago
- ☆129 · Updated last year
- ☆123 · Updated 11 months ago
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 10 months ago
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- Implementation of the paper: "AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?" ☆69 · Updated last year
- Train your own SOTA deductive reasoning model ☆107 · Updated 10 months ago
- ☆105 · Updated last year
- Code for NeurIPS'24 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆235 · Updated 6 months ago
- Advanced Reasoning Benchmark Dataset for LLMs ☆47 · Updated 2 years ago
- Code for PHATGOOSE introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Updated last year
- Minimal implementation of the "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" paper (arXiv:2401.01335) ☆29 · Updated last year
- Accompanying material for the sleep-time compute paper ☆119 · Updated 9 months ago
- ☆144 · Updated last year
- A library for benchmarking the Long Term Memory and Continual Learning capabilities of LLM-based agents. With all the tests and code you… ☆82 · Updated last year
- Code for the ICLR 2024 paper "How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions" ☆71 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆63 · Updated last year
- A benchmark that challenges language models to code solutions for scientific problems ☆168 · Updated this week
- Work done by the Oxen.ai Community, attempting to reproduce the Self-Rewarding Language Model paper from Meta AI ☆132 · Updated last year
- ☆41 · Updated last year
- Evaluation of neuro-symbolic engines ☆41 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where thoughts are generated before decoding. ☆175 · Updated last year
- ☆48 · Updated last year
- 🔧 Compare how Agent systems perform on several benchmarks. 📊🚀 ☆103 · Updated 5 months ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ☆120 · Updated 3 months ago
- ☆99 · Updated last year
- Augmented LLM with self-reflection ☆136 · Updated 2 years ago
- ☆35 · Updated 8 months ago
- Evaluating LLMs with CommonGen-Lite ☆93 · Updated last year