vtuber-plan / olah
Self-hosted huggingface mirror service.
☆208 · Updated 5 months ago
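Clients built on huggingface_hub can be pointed at a self-hosted mirror like this via the `HF_ENDPOINT` environment variable. The sketch below assumes a hypothetical local olah instance at `http://localhost:8090`; substitute the host and port your own deployment actually listens on.

```shell
# Point huggingface_hub / huggingface-cli at a self-hosted mirror.
# http://localhost:8090 is a hypothetical local olah address, not a
# documented default -- use your deployment's real host and port.
export HF_ENDPOINT=http://localhost:8090

# With the variable set, a download such as
#   huggingface-cli download bert-base-uncased
# is served through the mirror instead of huggingface.co.
echo "$HF_ENDPOINT"
```

`HF_ENDPOINT` is read by huggingface_hub at import time, so it must be set before the client process starts.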
Alternatives and similar repositories for olah
Users interested in olah are comparing it to the libraries listed below.
- Autoscale LLM (vLLM, SGLang, LMDeploy) inferences on Kubernetes (and others) ☆278 · Updated 2 years ago
- LM inference server implementation based on *.cpp. ☆294 · Updated 3 weeks ago
- A shim driver that allows in-docker nvidia-smi to show the correct process list without modifying anything ☆99 · Updated 5 months ago
- ☆530 · Updated 2 months ago
- ☆270 · Updated 3 weeks ago
- xet client tech, used in huggingface_hub ☆356 · Updated this week
- A simple service that integrates vLLM with Ray Serve for fast and scalable LLM serving. ☆78 · Updated last year
- The main repository for building Pascal-compatible versions of ML applications and libraries. ☆156 · Updated 3 months ago
- Review/Check GGUF files and estimate the memory usage and maximum tokens per second. ☆220 · Updated 4 months ago
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆218 · Updated last year
- GPUd automates monitoring, diagnostics, and issue identification for GPUs ☆464 · Updated this week
- Module, Model, and Tensor Serialization/Deserialization ☆278 · Updated 3 months ago
- Comparison of Language Model Inference Engines ☆237 · Updated last year
- A benchmarking tool for comparing different LLM API providers' DeepSeek model deployments ☆30 · Updated 8 months ago
- a huggingface mirror site. ☆320 · Updated last year
- Practical GPU Sharing Without Memory Size Constraints ☆296 · Updated 8 months ago
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆755 · Updated this week
- Open Source Text Embedding Models with OpenAI Compatible API ☆164 · Updated last year
- ⚡️ 80x faster Fasttext language detection out of the box | Split text by language ☆271 · Updated 3 months ago
- ClearML Fractional GPU - Run multiple containers on the same GPU with driver-level memory limitation ✨ and compute time-slicing ☆88 · Updated last month
- 🚢 Yet another operator for running large language models on Kubernetes with ease. Powered by Ollama! 🐫 ☆225 · Updated this week
- NVIDIA vGPU Device Manager manages NVIDIA vGPU devices on top of Kubernetes ☆152 · Updated this week
- Unlock Unlimited Potential! Share Your GPU Power Across Your Local Network! ☆72 · Updated 6 months ago
- ☆66 · Updated 8 months ago
- Inference server benchmarking tool ☆130 · Updated 2 months ago
- The LLM API Benchmark Tool is a flexible Go-based utility designed to measure and analyze the performance of OpenAI-compatible API endpoints. ☆59 · Updated last month
- GPU environment and cluster management with LLM support ☆654 · Updated last year
- OpenAI-compatible API for LLMs and embeddings (LLaMA, Vicuna, ChatGLM, and many others) ☆275 · Updated 2 years ago
- This is the documentation repository for SGLang. It is auto-generated from https://github.com/sgl-project/sglang/tree/main/docs. ☆92 · Updated this week
- ☸️ Easy, advanced inference platform for large language models on Kubernetes. 🌟 Star to support our work!