ScalingIntelligence / Archon
Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file.
☆173 · Updated 2 months ago
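The idea of composing inference-time techniques through a single JSON config can be pictured with a minimal sketch. Note this is an illustrative assumption, not Archon's actual schema: the field names (`layers`, `type`, `model`, `samples`, `top_k`) and model names are hypothetical.

```python
import json

# Hypothetical config sketching a declarative inference-time pipeline:
# sample several generations, rank them, then fuse the best candidates.
# Keys and values are illustrative only, not Archon's real schema.
config_text = """
{
  "layers": [
    {"type": "generator", "model": "example-model-small", "samples": 5},
    {"type": "ranker",    "model": "example-model-small", "top_k": 2},
    {"type": "fuser",     "model": "example-model-large"}
  ]
}
"""

config = json.loads(config_text)

# A runner would walk the layers in order, wiring each stage's outputs
# into the next; here we just show the declared pipeline.
for layer in config["layers"]:
    print(layer["type"])
```

Swapping techniques (e.g. more samples, a different ranker) then means editing the config rather than changing code, which is the appeal of this style of framework.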
Alternatives and similar repositories for Archon
Users interested in Archon are comparing it to the libraries listed below.
- ☆114 · Updated 3 months ago
- A Collection of Competitive Text-Based Games for Language Model Evaluation and Reinforcement Learning ☆169 · Updated this week
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore" ☆201 · Updated 3 weeks ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆207 · Updated 3 weeks ago
- LOFT: A 1 Million+ Token Long-Context Benchmark ☆198 · Updated last month
- Repository for the paper "Stream of Search: Learning to Search in Language" ☆146 · Updated 4 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆171 · Updated 4 months ago
- A simple unified framework for evaluating LLMs ☆215 · Updated last month
- ☆126 · Updated 2 months ago
- ☆92 · Updated 8 months ago
- Functional Benchmarks and the Reasoning Gap ☆86 · Updated 8 months ago
- ☆113 · Updated 4 months ago
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates ☆117 · Updated this week
- ☆86 · Updated 3 weeks ago
- Reproducible, flexible LLM evaluations ☆204 · Updated 3 weeks ago
- Scaling Data for SWE-agents ☆212 · Updated this week
- Code for the paper "RouterBench: A Benchmark for Multi-LLM Routing System" ☆121 · Updated 11 months ago
- ☆76 · Updated last month
- Official repo for "InSTA: Towards Internet-Scale Training For Agents" ☆42 · Updated this week
- Official repository for "R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents" ☆73 · Updated last month
- Open source interpretability artefacts for R1 ☆140 · Updated last month
- Replicating o1 inference-time scaling laws ☆87 · Updated 6 months ago
- SWE Arena ☆33 · Updated last month
- Implementation of the Quiet-STaR paper (https://arxiv.org/pdf/2403.09629.pdf) ☆54 · Updated 9 months ago
- ☆41 · Updated 4 months ago
- Code for the NeurIPS'24 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆193 · Updated 6 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆116 · Updated 11 months ago
- Train your own SOTA deductive reasoning model ☆92 · Updated 2 months ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆98 · Updated last month
- Code for the paper "Learning Adaptive Parallel Reasoning with Language Models" ☆94 · Updated last month