OpenCSGs / csghub-sdk
The CSGHub SDK is a Python client designed to interact seamlessly with the CSGHub server. It gives Python developers an efficient, straightforward way to operate and manage remote CSGHub instances.
☆19Updated this week
Alternatives and similar repositories for csghub-sdk
Users that are interested in csghub-sdk are comparing it to the libraries listed below
- A framework for training large language models; supports LoRA, full-parameter fine-tuning, etc. Define a YAML file to start training/fine-tuning of y…☆30Updated last year
- llm-inference is a platform for publishing and managing LLM inference, providing a wide range of out-of-the-box features for model deploy…☆88Updated last year
- This repository provides installation scripts and configuration files for deploying the CSGHub instance, including Helm charts and Docker…☆16Updated this week
- LLM scheduler user interface☆20Updated last year
- Inference code of Lingma SWE-GPT☆251Updated 11 months ago
- An open platform for enhancing the capability of LLMs in workflow orchestration.☆180Updated 8 months ago
- ☆54Updated 4 months ago
- AutoHub: A Personal Browser Automation Assistant☆24Updated 3 months ago
- Chinese version of CodeLLaMA, a code-generation assistant; 20,000+ cumulative downloads on Hugging Face☆45Updated 2 years ago
- ☆160Updated last year
- Easy, fast, and cheap pretraining, fine-tuning, and serving for everyone☆315Updated 4 months ago
- Multi-Faceted AI Agent and Workflow Autotuning. Automatically optimizes LangChain, LangGraph, DSPy programs for better quality, lower exe…☆263Updated 6 months ago
- 🍎APPL: A Prompt Programming Language. Seamlessly integrate LLMs with programs.☆264Updated 9 months ago
- The evaluation benchmark on MCP servers☆225Updated 2 months ago
- Official implementation of paper How to Understand Whole Repository? New SOTA on SWE-bench Lite (21.3%)☆95Updated 7 months ago
- ☆112Updated last year
- A minimalist benchmarking tool designed to test the routine-generation capabilities of LLMs.☆27Updated 11 months ago
- Industrial-level evaluation benchmarks for coding LLMs across the full life cycle of AI-native software development; an enterprise-grade code LLM evaluation system, with more released over time☆103Updated 6 months ago
- [ACL 2025] Graph-guided agentic framework for code localization https://arxiv.org/abs/2503.09089☆544Updated 3 months ago
- ☆102Updated last year
- ☆182Updated 2 weeks ago
- An automated evaluation service built on Dify + Langfuse☆85Updated 5 months ago
- [ICLR 2025] The official implementation of paper "ToolGen: Unified Tool Retrieval and Calling via Generation"☆162Updated 7 months ago
- Official repository for our paper "FullStack Bench: Evaluating LLMs as Full Stack Coders"☆107Updated 6 months ago
- MoonPalace is an API debugging tool provided by Moonshot AI.☆220Updated 10 months ago
- Data processing for code LLM pretraining, fine-tuning, and DPO; a state-of-the-art industry processing pipeline☆45Updated last year
- ☆14Updated 7 months ago
- 🦀️ CRAB: Cross-environment Agent Benchmark for Multimodal Language Model Agents. https://crab.camel-ai.org/☆381Updated 4 months ago
- The next generation of Multi-Modal Multi-Agent platform.☆108Updated 6 months ago
- DeepSolution: Boosting Complex Engineering Solution Design via Tree-based Exploration and Bi-point Thinking☆49Updated 8 months ago