continuedev / instinct
The state-of-the-art open Next Edit model, built by Continue
☆48 · Updated 5 months ago
Alternatives and similar repositories for instinct
Users interested in instinct are comparing it to the libraries listed below.
- Community-led collection of essential ast-grep rules. ☆123 · Updated 6 months ago
- Coding problems used in aider's polyglot benchmark ☆199 · Updated last year
- Split code into semantic chunks (a minimal sketch of the idea follows this list) ☆58 · Updated last year
- RAG on codebases using treesitter and LanceDB ☆276 · Updated last year
- ASTChunk is a Python toolkit for code chunking using Abstract Syntax Trees (ASTs), designed to create structurally sound and meaningful c… ☆144 · Updated 7 months ago
- The LLM abstraction layer for modern AI agent applications. ☆504 · Updated this week
- Code for the paper "Coding Agents with Multimodal Browsing are Generalist Problem Solvers" ☆97 · Updated 3 months ago
- Python SDK for ACP clients and agents. ☆143 · Updated last week
- An Agentic Deep Research Assistant similar to Gemini and OpenAI Deep Research ☆132 · Updated 11 months ago
- Rust executable for Refact Agent; it lives inside your IDE and keeps AST and VecDB indexes up to date, offers agentic tools for an AI mod… ☆67 · Updated 11 months ago
- Enhancing AI Software Engineering with Repository-level Code Graph ☆248 · Updated 10 months ago
- ☆56 · Updated 10 months ago
- An LLM-powered (CodeLlama or OpenAI) local diff code review tool. ☆41 · Updated last year
- ☆392 · Updated 4 months ago
- A high-performance Rust implementation of an OpenAI-compatible API gateway for Claude Code CLI. ☆96 · Updated last month
- Verify the precision of all Kimi K2 API vendors ☆507 · Updated last week
- LLM-as-SERP ☆68 · Updated 11 months ago
- Extract and compare system prompts and tools from different Claude Code versions ☆182 · Updated 3 months ago
- Harness used to benchmark aider against SWE Bench benchmarks ☆79 · Updated last year
- AI benchmark runtime framework that allows you to integrate and evaluate AI tasks using Docker-based benchmarks. ☆178 · Updated last month
- DevQualityEval: An evaluation benchmark 📈 and framework to compare and evolve the quality of code generation of LLMs. ☆185 · Updated 8 months ago
- Self-hosted alternative to OpenAI's Responses API, compatible with the Agents SDK and working with all model providers (Claude/R1/Qwen/Ollama et… ☆175 · Updated 10 months ago
- The ACP implementation for Claude Code ☆235 · Updated 4 months ago
- Multi-language code navigation API in a container ☆100 · Updated 5 months ago
- Sandboxed code execution for AI agents, locally or on the cloud. Massively parallel, easy to extend. Powering SWE-agent and more. ☆424 · Updated last week
- Official implementation of the paper "How to Understand Whole Repository? New SOTA on SWE-bench Lite (21.3%)" ☆95 · Updated 10 months ago
- Proof-of-concept of Cursor's Instant Apply feature ☆88 · Updated last year
- Agent–computer interface for an AI software engineer. ☆115 · Updated last month
- An evaluation benchmark for MCP servers ☆238 · Updated 5 months ago
- A powerful Python framework for orchestrating AI agents and managing complex LLM-driven tasks with ease. ☆93 · Updated 6 months ago
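
Several entries above (the semantic chunker, ASTChunk, and the treesitter + LanceDB RAG project) share one core idea: split source files along syntax-tree boundaries instead of fixed-size line or token windows, so every chunk is a syntactically complete unit. The sketch below illustrates that idea for Python using only the standard-library `ast` module; it is a minimal illustration of the general technique, not the actual implementation of any repository listed here (those projects handle multiple languages, typically via tree-sitter).

```python
# Minimal sketch of AST-boundary chunking (illustrative only; not taken
# from any of the repositories listed above).
import ast


def chunk_python_source(source: str) -> list[str]:
    """Split Python source into one chunk per top-level function or class.

    Cutting on AST node boundaries keeps each chunk syntactically whole,
    unlike fixed-size windows that can slice a definition in half.
    """
    tree = ast.parse(source)
    lines = source.splitlines()
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # lineno and end_lineno are 1-based and inclusive (Python 3.8+).
            chunks.append("\n".join(lines[node.lineno - 1 : node.end_lineno]))
    return chunks


if __name__ == "__main__":
    example = (
        "def add(a, b):\n"
        "    return a + b\n"
        "\n"
        "class Greeter:\n"
        "    def hi(self):\n"
        "        return 'hi'\n"
    )
    for i, chunk in enumerate(chunk_python_source(example)):
        print(f"--- chunk {i} ---\n{chunk}")
```

Chunks produced this way map one-to-one onto whole definitions, which is why codebase-RAG setups like the treesitter + LanceDB repo above embed syntax-aware chunks rather than arbitrary line ranges.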