star-whale / starwhale
An MLOps/LLMOps platform.
☆208 · Updated last month
Related projects:
- A Survey of AI startups ☆391 · Updated last year
- Run your deep learning workloads on Kubernetes more easily and efficiently. ☆498 · Updated 6 months ago
- ☆112 · Updated last month
- A high-performance deep learning training platform with task-level time-sharing scheduling of GPU compute ☆291 · Updated 10 months ago
- Kubeflow Helm chart ☆136 · Updated last year
- OpenAIOS is an incubating open-source distributed OS kernel based on Kubernetes for AI workloads. OpenAIOS-Platform is an AI development… ☆93 · Updated 3 years ago
- GLake: optimizing GPU memory management and IO transmission. ☆351 · Updated last month
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆260 · Updated last year
- ☆158 · Updated this week
- ☆193 · Updated last year
- Using CRDs to manage GPU resources in Kubernetes. ☆185 · Updated last year
- Easy, fast, and cheap pretraining, fine-tuning, and serving for everyone ☆245 · Updated last week
- ☆97 · Updated 5 months ago
- Cloud Native ML/DL Platform ☆127 · Updated 4 years ago
- Cloud-native way to provide elastic Jupyter Notebooks on Kubernetes. Run remote kernels, natively. ☆193 · Updated 2 years ago
- One-click machine learning deployment (LLM, text-to-image and so on) at scale on any cluster (GCP, AWS, Lambda Labs, your home lab, or ev… ☆239 · Updated 10 months ago
- Docker for Your ML/DL Models Based on OCI Artifacts ☆452 · Updated 7 months ago
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆1,045 · Updated last month
- Lepton Examples ☆139 · Updated last month
- Heterogeneous AI Computing Virtualization Middleware ☆658 · Updated this week
- One-click Kubeflow installation files for mainland China ☆338 · Updated 2 years ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆512 · Updated last week
- LLM inference benchmark ☆331 · Updated last month
- A Kubernetes plugin that enables dynamically adding or removing GPU resources for a running Pod ☆118 · Updated 2 years ago
- Elastic deep learning training on Kubernetes, leveraging EDL and Volcano ☆31 · Updated last year
- Chat to deploy and manage applications on any infrastructure ☆126 · Updated 10 months ago
- llm-inference is a platform for publishing and managing LLM inference, providing a wide range of out-of-the-box features for model deploy… ☆66 · Updated 4 months ago
- Large language model fine-tuning capabilities based on cloud-native and distributed computing. ☆87 · Updated 6 months ago
- FlagPerf is an open-source software platform for benchmarking AI chips. ☆300 · Updated this week
- ☆251 · Updated last week