ScalingIntelligence / codemonkeys
☆59 · Updated last year
Alternatives and similar repositories for codemonkeys
Users interested in codemonkeys are comparing it to the repositories listed below.
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆189 · Updated 10 months ago
- ☆137 · Updated 10 months ago
- Accompanying material for the sleep-time compute paper. ☆119 · Updated 9 months ago
- ☆131 · Updated 8 months ago
- OpenCoconut implements a latent reasoning paradigm where thoughts are generated before decoding. ☆175 · Updated last year
- Train your own SOTA deductive reasoning model. ☆107 · Updated 11 months ago
- ☆132 · Updated 8 months ago
- Storing long contexts in tiny caches with self-study. ☆233 · Updated 2 months ago
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆65 · Updated 9 months ago
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache. ☆140 · Updated 5 months ago
- Small, simple agent task environments for training and evaluation. ☆19 · Updated last year
- Official repo for Learning to Reason for Long-Form Story Generation. ☆74 · Updated 9 months ago
- ☆67 · Updated 8 months ago
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆96 · Updated 8 months ago
- j1-micro (1.7B) & j1-nano (600M) are absurdly tiny but mighty reward models. ☆102 · Updated 6 months ago
- ☆123 · Updated 11 months ago
- [NeurIPS 2025 D&B Spotlight] Scaling Data for SWE-agents. ☆538 · Updated this week
- Streamline on-policy/off-policy distillation workflows in a few lines of code. ☆94 · Updated last week
- ☆91 · Updated last month
- Matrix (Multi-Agent daTa geneRation Infra and eXperimentation framework) is a versatile engine for multi-agent conversational data genera… ☆261 · Updated this week
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna. ☆59 · Updated 3 months ago
- QAlign is a test-time alignment approach that improves language model performance using Markov chain Monte Carlo methods. ☆26 · Updated last month
- ☆61 · Updated 7 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆88 · Updated 10 months ago
- Entropy-based sampling and parallel CoT decoding. ☆17 · Updated last year
- Accelerating your LLM training to full speed! Made with ❤️ by ServiceNow Research. ☆287 · Updated this week
- EvaByte: Efficient Byte-level Language Models at Scale. ☆115 · Updated 9 months ago
- A fast, local, and secure approach for training LLMs on coding tasks using GRPO with WebAssembly and interpreter feedback. ☆41 · Updated 10 months ago
- ☆134 · Updated 4 months ago
- ☆56 · Updated last year