OpenCSGs / llm-inference
llm-inference is a platform for publishing and managing LLM inference services, providing a wide range of out-of-the-box features for model deployment, such as a UI, a RESTful API, auto-scaling, computing-resource management, monitoring, and more.
☆86 · Updated last year
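As a sketch of what driving such a RESTful deployment API might look like, the snippet below builds a completion request payload. The endpoint shape, field names, and model name are assumptions for illustration, not llm-inference's documented API.

```python
import json

# Hypothetical request builder for an OpenAI-style completion endpoint.
# The "model", "prompt", and "max_tokens" fields are assumptions, not
# llm-inference's actual request schema.
def build_completion_request(model: str, prompt: str, max_tokens: int = 128) -> str:
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

# The request body would then be POSTed to the platform's inference endpoint,
# e.g. with urllib.request or the `requests` library.
body = build_completion_request("llama-2-7b", "Hello")
print(body)
```

In practice the served model name and endpoint path come from the deployment you configured in the platform's UI or API.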
Alternatives and similar repositories for llm-inference
Users interested in llm-inference are comparing it to the libraries listed below.
- A framework for training large language models; supports LoRA, full-parameter fine-tuning, etc.; define a YAML to start training/fine-tuning of y… ☆31 · Updated last year
- ☆112 · Updated last year
- bisheng-unstructured library ☆55 · Updated 4 months ago
- An integrated user interface for use with the HAI Platform ☆53 · Updated 2 years ago
- LLM scheduler user interface ☆18 · Updated last year
- AGI module library architecture diagrams ☆77 · Updated 2 years ago
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc. ☆39 · Updated last year
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆248 · Updated last year
- Easy, fast, and cheap pretraining, fine-tuning, and serving for everyone ☆315 · Updated 2 months ago
- Run chatglm3-6b on the BM1684X ☆40 · Updated last year
- FasterTransformer for the CodeGeeX model ☆65 · Updated 2 years ago
- xllamacpp, a Python wrapper for llama.cpp ☆59 · Updated 2 weeks ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆265 · Updated last month
- Hands-on with the Hygon DCU, a Chinese-made accelerator card (large-model training, fine-tuning, inference, etc.) ☆48 · Updated last month
- [ACL2025 demo track] ROGRAG: A Robustly Optimized GraphRAG Framework ☆175 · Updated last week
- An open-source LLM based on a Mixture-of-Experts (MoE) structure ☆58 · Updated last year
- Byzer-retrieval is a distributed retrieval system designed as a backend for LLM RAG (Retrieval-Augmented Generation). The system su… ☆49 · Updated 6 months ago
- App-Controller: allow users to manipulate your app with natural language ☆132 · Updated 10 months ago
- A toolkit for running on-device large language models (LLMs) in an app ☆80 · Updated last year
- Model compression toolkit engineered for enhanced usability, comprehensiveness, and efficiency ☆134 · Updated this week
- High-performance LLM inference based on our optimized version of FasterTransformer ☆124 · Updated last year
- Mixture-of-Experts (MoE) Language Model ☆189 · Updated last year
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024 ☆57 · Updated 10 months ago
- A benchmarking tool for comparing different LLM API providers' DeepSeek model deployments ☆29 · Updated 6 months ago
- OpenLLaMA-Chinese, permissively licensed open-source instruction-following models based on OpenLLaMA ☆66 · Updated 2 years ago
- bisheng model services backend ☆31 · Updated last year
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… ☆197 · Updated last week
- ☆29 · Updated last year
- Efficient AI Inference & Serving ☆477 · Updated last year
- Pretrain, fine-tune, and serve LLMs on Intel platforms with Ray ☆132 · Updated last week