OpenCSGs / llm-inference
llm-inference is a platform for publishing and managing LLM inference services, providing a wide range of out-of-the-box features for model deployment, such as a UI, a RESTful API, auto-scaling, computing-resource management, monitoring, and more.
☆86Updated last year
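A platform like this typically accepts deployment requests over its REST API with auto-scaling bounds attached. As a hedged illustration only (the field names and schema below are hypothetical, not llm-inference's actual API), such a request body might be assembled like this:

```python
import json


def build_deploy_request(model_id: str, min_replicas: int = 1, max_replicas: int = 4) -> str:
    """Assemble a JSON body for a hypothetical model-deployment endpoint.

    The field names here are illustrative; consult the llm-inference
    documentation for the real schema.
    """
    if min_replicas < 1 or max_replicas < min_replicas:
        raise ValueError("replica bounds must satisfy 1 <= min <= max")
    payload = {
        "model": model_id,
        "autoscaling": {"min_replicas": min_replicas, "max_replicas": max_replicas},
        "monitoring": {"enabled": True},
    }
    return json.dumps(payload)


body = build_deploy_request("facebook/opt-125m", min_replicas=1, max_replicas=2)
```

Validating the replica bounds client-side, before the request is sent, gives an earlier and clearer error than a server-side rejection would.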
Alternatives and similar repositories for llm-inference
Users who are interested in llm-inference are comparing it to the libraries listed below.
- A framework for training large language models; supports LoRA, full-parameter fine-tuning, etc. Define a YAML file to start training/fine-tuning of y…☆30Updated last year
- ☆112Updated last year
- bisheng-unstructured library☆55Updated 5 months ago
- An integrated user interface for use with the HAI Platform☆53Updated 2 years ago
- LLM scheduler user interface☆18Updated last year
- Easy, fast, and cheap pretraining, fine-tuning, and serving for everyone☆315Updated 3 months ago
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc.☆39Updated last year
- Akcio is a demonstration project for Retrieval Augmented Generation (RAG). It leverages the power of LLMs to generate responses and uses v…☆258Updated last year
- GLM Series Edge Models☆149Updated 4 months ago
- [ACL2025 demo track] ROGRAG: A Robustly Optimized GraphRAG Framework☆175Updated 3 weeks ago
- xllamacpp - a Python wrapper of llama.cpp☆60Updated last week
- Byzer-retrieval is a distributed retrieval system designed as a backend for LLM RAG (Retrieval Augmented Generation). The system su…☆49Updated 7 months ago
- This is an NVIDIA AI Workbench example project that demonstrates an end-to-end model development workflow using Llamafactory.☆67Updated last year
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs).☆248Updated last year
- An open-source LLM based on an MoE structure.☆58Updated last year
- bisheng model services backend☆30Updated last year
- Pretrain, finetune and serve LLMs on Intel platforms with Ray☆131Updated last month
- ☆32Updated last year
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including …☆266Updated 2 months ago
- High-performance LLM inference based on our optimized version of FasterTransformer☆123Updated last year
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024☆57Updated 11 months ago
- App-Controller: Allow users to manipulate your App with natural language☆131Updated 11 months ago
- Run ChatGLM3-6B on the BM1684X☆40Updated last year
- vLLM documentation in Simplified Chinese / vLLM 中文文档☆114Updated last week
- A toolkit for running on-device large language models (LLMs) in an app☆78Updated last year
- This repository provides installation scripts and configuration files for deploying a CSGHub instance, including Helm charts and Docker…☆16Updated this week
- ☆29Updated last year
- LLM inference service performance testing☆43Updated last year
- A demo built on Megrez-3B-Instruct, integrating a web search tool to enhance the model's question-and-answer capabilities.☆39Updated 10 months ago
- Mixture-of-Experts (MoE) Language Model☆189Updated last year
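Several entries above (XVERSE-MoE-A4.2B, the open-source MoE LLM, and the Mixture-of-Experts language model) are built on MoE architectures. As a minimal sketch of the core routing idea behind such models — top-k gating that sends each token to a few experts with normalized weights — and assuming nothing about any particular repo's implementation:

```python
import math


def top_k_gate(scores, k=2):
    """Select the k highest-scoring experts and turn their scores into
    routing weights via a softmax over just the selected scores."""
    # Indices of the k largest gating scores, highest first.
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    exps = [math.exp(scores[i]) for i in top]
    total = sum(exps)
    # Each token would be dispatched to these experts with these weights.
    return [(i, e / total) for i, e in zip(top, exps)]


# Example gating scores for four experts; experts 1 and 3 win.
routes = top_k_gate([0.1, 2.0, -1.0, 1.5], k=2)
```

Real MoE layers compute these scores with a learned router network per token and add load-balancing losses; this sketch only shows the select-and-normalize step.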