OpenCSGs / llm-inference
llm-inference is a platform for publishing and managing LLM inference services, providing a wide range of out-of-the-box features for model deployment, such as a UI, a RESTful API, auto-scaling, compute resource management, monitoring, and more.
☆87 · Updated last year
Alternatives and similar repositories for llm-inference
Users interested in llm-inference are comparing it to the libraries listed below.
- A framework for training large language models; supports LoRA, full-parameter fine-tuning, etc.; define a YAML to start training/fine-tuning of y…☆30 · Updated last year
- ☆112 · Updated last year
- bisheng-unstructured library☆55 · Updated 5 months ago
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc.☆39 · Updated last year
- Integrated user interface for use with the HAI Platform☆53 · Updated 2 years ago
- Easy, fast, and cheap pretraining, fine-tuning, and serving for everyone☆315 · Updated 3 months ago
- LLM scheduler user interface☆18 · Updated last year
- Akcio is a demonstration project for Retrieval Augmented Generation (RAG). It leverages the power of LLMs to generate responses and uses v…☆258 · Updated last year
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including …☆267 · Updated 3 months ago
- App-Controller: Allow users to manipulate your App with natural language☆131 · Updated 11 months ago
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs).☆249 · Updated last year
- vLLM documentation in Simplified Chinese / vLLM 中文文档☆117 · Updated 3 weeks ago
- Efficient AI Inference & Serving☆477 · Updated last year
- ☆32 · Updated last year
- xllamacpp - a Python wrapper of llama.cpp☆64 · Updated this week
- GLM Series Edge Models☆153 · Updated 5 months ago
- Run chatglm3-6b on BM1684X☆40 · Updated last year
- ☆29 · Updated last year
- [ACL2025 demo track] ROGRAG: A Robustly Optimized GraphRAG Framework☆179 · Updated last week
- Hands-on with the Hygon DCU, a Chinese-made accelerator card (LLM training, fine-tuning, inference, etc.)☆55 · Updated 3 months ago
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec…☆201 · Updated last month
- Pretrain, finetune, and serve LLMs on Intel platforms with Ray☆131 · Updated last month
- This repository provides installation scripts and configuration files for deploying a CSGHub instance, including Helm charts and Docker…☆16 · Updated this week
- Mixture-of-Experts (MoE) Language Model☆191 · Updated last year
- Index of the CodeFuse Repositories☆137 · Updated last year
- Omni_Infer is a suite of inference accelerators designed for the Ascend NPU platform, offering native support and an expanding feature se…☆84 · Updated this week
- A benchmarking tool for comparing different LLM API providers' DeepSeek model deployments.☆30 · Updated 7 months ago
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving.☆76 · Updated last year
- bisheng model services backend☆32 · Updated last year
- ☆49 · Updated last month