tensorchord / modelz-ChatGLM
Deploy ChatGLM on Modelz
☆16 · Updated 2 years ago
Alternatives and similar repositories for modelz-ChatGLM
Users interested in modelz-ChatGLM are comparing it to the libraries listed below.
- This repository contains statistics about AI infrastructure products. ☆17 · Updated 9 months ago
- This plugin integrates Faiss into the Greenplum database to provide vector-recall capability. ☆24 · Updated 3 years ago
- A memory-efficient DLRM training solution using ColossalAI. ☆106 · Updated 3 years ago
- ☆35 · Updated 4 years ago
- Deploy and monitor ML models in any cloud. ☆39 · Updated 2 years ago
- A power tool for prompt engineers: compare multiple prompts across multiple LLM models side by side. ☆97 · Updated 2 years ago
- Yet another coding assistant powered by LLMs. ☆16 · Updated last year
- Benchmark for online serving of machine learning models (LLM, embedding, Stable Diffusion, Whisper). ☆28 · Updated 2 years ago
- Set up the environment for vLLM users. ☆16 · Updated 2 years ago
- Some microbenchmarks and design docs before commencement. ☆12 · Updated 4 years ago
- Baidu QA dataset with one million entries. ☆47 · Updated 2 years ago
- A benchmarking tool for comparing different LLM API providers' DeepSeek model deployments. ☆30 · Updated 8 months ago
- ☆25 · Updated 2 years ago
- Implemented a script that automatically adjusts Qwen3's inference and non-inference capabilities, based on an OpenAI-like API. The infere… ☆22 · Updated last month
- Workflow Defined Engine. ☆25 · Updated last month
- Evaluation for AI apps and agents. ☆43 · Updated last year
- BERT model-to-vector service. An efficient text-to-vector service supporting multi-GPU, multi-worker, and multi-client usage, ready to use out of the box. ☆12 · Updated 3 years ago
- Gaokao Benchmark for AI. ☆109 · Updated 3 years ago
- Pretrain, finetune, and serve LLMs on Intel platforms with Ray. ☆130 · Updated 2 months ago
- ☆31 · Updated 7 months ago
- ☆32 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆16 · Updated last year
- Byzer-retrieval is a distributed retrieval system designed as a backend for LLM RAG (Retrieval-Augmented Generation). The system su… ☆49 · Updated 9 months ago
- ☆56 · Updated last year
- Fine-Tune LLM Synthetic-Data application and "From Data to AGI: Unlocking the Secrets of Large Language Model". ☆16 · Updated last year
- llm-inference is a platform for publishing and managing LLM inference, providing a wide range of out-of-the-box features for model deploy… ☆89 · Updated last year
- Autoscale LLM (vLLM, SGLang, LMDeploy) inferences on Kubernetes (and others). ☆278 · Updated 2 years ago
- An integrated user interface for use with the HAI Platform. ☆53 · Updated 2 years ago
- ☆113 · Updated last year
- Sky Computing: Accelerating Geo-distributed Computing in Federated Learning