Just-Curieous / Curie
❓Curie: Automated and Rigorous Scientific Experimentation with AI Agents
☆86 · Updated this week
Alternatives and similar repositories for Curie
Users interested in Curie are comparing it to the repositories listed below:
- Scaling Data for SWE-agents ☆160 · Updated this week
- Systematic evaluation framework that automatically rates overthinking behavior in large language models. ☆88 · Updated last month
- Accompanying material for the sleep-time compute paper ☆82 · Updated 2 weeks ago
- ☆45 · Updated 10 months ago
- Simple extension on vLLM to help you speed up reasoning models without training. ☆149 · Updated last week
- Archon provides a modular framework for combining different inference-time techniques and LMs with just a JSON config file. ☆172 · Updated 2 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆116 · Updated 5 months ago
- ☆54 · Updated this week
- [ICML 2025] Reward-guided Speculative Decoding (RSD) for efficiency and effectiveness. ☆28 · Updated last week
- Official repository for R2E-Gym: Procedural Environment Generation and Hybrid Verifiers for Scaling Open-Weights SWE Agents ☆67 · Updated 3 weeks ago
- Cascade Speculative Drafting ☆29 · Updated last year
- ArcticTraining is a framework designed to simplify and accelerate the post-training process for large language models (LLMs) ☆93 · Updated this week
- ☆27 · Updated 2 weeks ago
- Benchmark and research code for the paper SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks ☆188 · Updated last week
- [ICLR 2024] Skeleton-of-Thought: Prompting LLMs for Efficient Parallel Generation ☆167 · Updated last year
- Training an LLM to use a calculator with multi-turn reinforcement learning, achieving a **62% absolute increase in evaluation accuracy**. ☆37 · Updated last week
- ☆65 · Updated 2 months ago
- LLM Serving Performance Evaluation Harness ☆78 · Updated 2 months ago
- Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache ☆99 · Updated 2 weeks ago
- Code for data-aware compression of DeepSeek models ☆24 · Updated last month
- ☆176 · Updated 2 weeks ago
- EvaByte: Efficient Byte-level Language Models at Scale ☆92 · Updated 3 weeks ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆126 · Updated 5 months ago
- [ACL 2024] Do Large Language Models Latently Perform Multi-Hop Reasoning? ☆65 · Updated last month
- The code for the paper ROUTERBENCH: A Benchmark for Multi-LLM Routing System ☆118 · Updated 11 months ago
- ☆37 · Updated 3 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆116 · Updated 10 months ago
- A lightweight framework for building research agents designed for developers ☆80 · Updated this week
- A scalable asynchronous reinforcement learning implementation with in-flight weight updates. ☆105 · Updated this week
- [OSDI'24] Serving LLM-based Applications Efficiently with Semantic Variable ☆155 · Updated 7 months ago