nvwacloud / tensorlink
Unlock Unlimited Potential! Share Your GPU Power Across Your Local Network!
☆72 · Updated 7 months ago
Alternatives and similar repositories for tensorlink
Users that are interested in tensorlink are comparing it to the libraries listed below
- LM inference server implementation based on *.cpp. ☆295 · Updated last month
- Review/Check GGUF files and estimate the memory usage and maximum tokens per second. ☆223 · Updated this week
- Self-hosted huggingface mirror service. ☆211 · Updated 5 months ago
- Implementation of remote CUDA/OpenCL protocol ☆38 · Updated 7 months ago
- A text-to-speech and speech-to-text server compatible with the OpenAI API, supporting Whisper, FunASR, Bark, and CosyVoice backends. ☆187 · Updated 2 weeks ago
- Autoscale LLM (vLLM, SGLang, LMDeploy) inference on Kubernetes (and others) ☆279 · Updated 2 years ago
- Open Source Text Embedding Models with OpenAI Compatible API ☆165 · Updated last year
- xllamacpp - a Python wrapper of llama.cpp ☆68 · Updated last week
- Comparison of Language Model Inference Engines ☆238 · Updated last year
- Download models from the Ollama library, without Ollama ☆119 · Updated last year
- OpenAI compatible API for TensorRT LLM triton backend (see the request sketch after this list) ☆218 · Updated last year
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving. ☆78 · Updated last year
- Scale a Docker container's GPU count up or down and resize its volume capacity, more easily than with K8s. ☆81 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆132 · Updated last year
- C++ implementation of Qwen-LM ☆614 · Updated last year
- ☆114 · Updated last year
- A shim driver that lets nvidia-smi inside Docker show the correct process list without modifying anything ☆100 · Updated 6 months ago
- Run DeepSeek-R1 GGUFs on KTransformers ☆259 · Updated 10 months ago
- A diverse, simple, and secure all-in-one LLMOps platform ☆109 · Updated last year
- ☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work! ☆284 · Updated 3 weeks ago
- LLM inference in C/C++ ☆21 · Updated 9 months ago
- Small language models for Chinese-language scenarios: llama2.c-zh ☆150 · Updated last year
- OpenAIOS vGPU device plugin for Kubernetes, originated from the OpenAIOS project to virtualize GPU device memory in order to allow app… ☆582 · Updated last year
- vLLM Router ☆54 · Updated last year
- Practical GPU Sharing Without Memory Size Constraints ☆296 · Updated 9 months ago
- A high-performance inference system for large language models, designed for production environments. ☆489 · Updated 3 weeks ago
- A huggingface mirror site. ☆324 · Updated last year
- LLM Inference benchmark ☆430 · Updated last year
- An integrated user interface for use with the HAI Platform ☆53 · Updated 2 years ago
- An MLOps/LLMOps platform ☆234 · Updated last year
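Several entries above advertise an OpenAI-compatible API (the TensorRT-LLM proxy, the embedding server, the speech server). As a rough sketch of what "OpenAI compatible" means for a chat endpoint, the request below targets a hypothetical local server; the base URL and model name are placeholders, not taken from any project on this list.

```python
import requests

# Placeholder endpoint; point this at whichever OpenAI-compatible
# server you are actually running.
BASE_URL = "http://localhost:8000/v1"

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": "my-model",  # placeholder; use the name your server reports
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 64,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Servers exposing this interface can also sit behind existing OpenAI SDK clients by overriding the client's base URL, which is the main appeal of the format.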