OpenCSGs / llm-inference
llm-inference is a platform for publishing and managing LLM inference services, providing a wide range of out-of-the-box features for model deployment, such as a UI, a RESTful API, auto-scaling, compute-resource management, monitoring, and more.
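As a sketch of what calling such a RESTful inference API usually looks like, the snippet below builds an OpenAI-style completion request with the standard library. The endpoint path, model name, and payload fields are illustrative assumptions, not llm-inference's documented API:

```python
import json
import urllib.request

# Hypothetical endpoint -- the path and payload fields below are
# assumptions for illustration, not the documented llm-inference API.
ENDPOINT = "http://localhost:8000/v1/completions"

def build_request(prompt: str, model: str = "llama-2-7b",
                  max_tokens: int = 64) -> urllib.request.Request:
    """Build an HTTP POST request for a generic completion-style endpoint."""
    payload = {"model": model, "prompt": prompt, "max_tokens": max_tokens}
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Hello, world")
print(req.get_full_url())             # http://localhost:8000/v1/completions
print(json.loads(req.data)["model"])  # llama-2-7b
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) would require a running server; the real endpoint and schema should be taken from the platform's own API documentation.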
☆90Updated last year
Alternatives and similar repositories for llm-inference
Users interested in llm-inference are comparing it to the libraries listed below.
- A framework for training large language models; supports LoRA, full-parameter fine-tuning, etc., and uses a YAML definition to start training/fine-tuning of y…☆31Updated last year
- ☆113Updated last year
- This repository provides installation scripts and configuration files for deploying a CSGHub instance, including Helm charts and Docker…☆17Updated last week
- bisheng-unstructured library☆56Updated 7 months ago
- Easy, fast, and cheap pretraining, fine-tuning, and serving for everyone☆316Updated 5 months ago
- An integrated user interface designed for use with the HAI Platform☆53Updated 2 years ago
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc.☆39Updated last year
- Hands-on guide to the Hygon DCU, a Chinese-made accelerator card (LLM training, fine-tuning, inference, etc.)☆60Updated 4 months ago
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs).☆250Updated last year
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including …☆270Updated 4 months ago
- GLM Series Edge Models☆156Updated 6 months ago
- LLM scheduler user interface☆21Updated last year
- xllamacpp - a Python wrapper of llama.cpp☆66Updated last week
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024☆59Updated last year
- Pretrain, finetune and serve LLMs on Intel platforms with Ray☆131Updated 2 months ago
- Run chatglm3-6b on the BM1684X☆40Updated last year
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec…☆212Updated 2 months ago
- Byzer-retrieval is a distributed retrieval system designed as a backend for LLM RAG (Retrieval-Augmented Generation). The system su…☆49Updated 9 months ago
- [ACL2025 demo track] ROGRAG: A Robustly Optimized GraphRAG Framework☆188Updated last week
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving.☆78Updated last year
- Efficient AI Inference & Serving☆478Updated last year
- This is an NVIDIA AI Workbench example project that demonstrates an end-to-end model development workflow using Llamafactory.☆70Updated last year
- The CSGHub SDK is a powerful Python client specifically designed to interact seamlessly with the CSGHub server. This toolkit is engineere…☆22Updated last week
- App-Controller: Allow users to manipulate your App with natural language☆132Updated last year
- A Toolkit for Running On-device Large Language Models (LLMs) in APP☆79Updated last year
- Mixture-of-Experts (MoE) Language Model☆192Updated last year
- ☆68Updated this week
- ☆29Updated last year
- Performance testing for LLM inference services☆44Updated 2 years ago
- Efficient, Flexible, and Highly Fault-Tolerant Model Service Management Based on SGLang☆61Updated last year