openshift-psap / llm-load-test
☆51 · Updated 5 months ago
Alternatives and similar repositories for llm-load-test
Users interested in llm-load-test are comparing it to the libraries listed below.
- An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment. ☆142 · Updated last week
- Repository to deploy LLMs with Multi-GPUs in distributed Kubernetes nodes ☆29 · Updated last year
- This project makes running the InstructLab large language model (LLM) fine-tuning process easy and flexible on OpenShift ☆27 · Updated 5 months ago
- Resources, demos, recipes, ... to work with LLMs on OpenShift with OpenShift AI or Open Data Hub. ☆146 · Updated 3 weeks ago
- Helm charts for llm-d ☆52 · Updated 6 months ago
- Artifacts for the Distributed Workloads stack as part of ODH ☆33 · Updated last week
- Caikit is an AI toolkit that enables users to manage models through a set of developer-friendly APIs. ☆112 · Updated 3 months ago
- llm-d benchmark scripts and tooling ☆42 · Updated this week
- Collection of demos for building Llama Stack based apps on OpenShift ☆58 · Updated 3 weeks ago
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆819 · Updated this week
- Model Registry provides a single pane of glass for ML model developers to index and manage models, versions, and ML artifacts metadata. I… ☆161 · Updated last week
- GenAI inference performance benchmarking tool ☆141 · Updated this week
- Auto-tuning for vllm. Getting the best performance out of your LLM deployment (vllm+guidellm+optuna) ☆32 · Updated this week
- Taxonomy tree that will allow you to create models tuned with your data ☆290 · Updated 4 months ago
- llm-d helm charts and deployment examples ☆48 · Updated last month
- AI-on-OpenShift website source code ☆101 · Updated last month
- Kubernetes enhancements for Network Topology Aware Gang Scheduling & Autoscaling ☆155 · Updated last week
- NVIDIA DRA Driver for GPUs ☆548 · Updated last week
- Examples for building and running LLM services and applications locally with Podman ☆190 · Updated 5 months ago
- InstaSlice Operator facilitates slicing of accelerators using stable APIs ☆49 · Updated this week
- Repository to demo GPU Sharing with Time Slicing, MPS, MIG and others ☆56 · Updated last year
- Controller for ModelMesh ☆242 · Updated 7 months ago
- Red Hat Enterprise Linux AI -- Developer Preview ☆172 · Updated last year
- Models as a Service ☆73 · Updated 3 months ago
- Test Orchestrator for Performance and Scalability of AI pLatforms ☆16 · Updated this week
- Achieve state of the art inference performance with modern accelerators on Kubernetes ☆2,403 · Updated this week
- Improve ROSA customer experience (and customer retention) by leveraging foundation models to do “gpt-chat” style search of Red Hat custo… ☆28 · Updated last year
- Model Server for Kepler ☆29 · Updated 3 months ago
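Several of the projects listed above (llm-load-test itself, the GenAI inference benchmarking tool, and the llm-d benchmark tooling) share the same core workload: driving concurrent requests against an OpenAI-compatible inference endpoint and measuring latency and throughput. The sketch below only illustrates that general pattern; it is not code from any of the listed repositories, and the endpoint URL, model name, and request parameters are placeholder assumptions.

```python
# Minimal, hypothetical sketch of LLM endpoint load testing:
# send N concurrent completion requests and report latency statistics.
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

import requests

ENDPOINT = "http://localhost:8000/v1/completions"  # assumed OpenAI-compatible server (placeholder)
MODEL = "my-model"                                  # placeholder model name
CONCURRENCY = 8
TOTAL_REQUESTS = 32


def one_request(_: int) -> float:
    """Send a single completion request and return its wall-clock latency in seconds."""
    payload = {"model": MODEL, "prompt": "Hello, world", "max_tokens": 64}
    start = time.perf_counter()
    resp = requests.post(ENDPOINT, json=payload, timeout=120)
    resp.raise_for_status()
    return time.perf_counter() - start


if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = list(pool.map(one_request, range(TOTAL_REQUESTS)))
    print(f"requests:     {len(latencies)}")
    print(f"mean latency: {statistics.mean(latencies):.3f}s")
    print(f"p95 latency:  {statistics.quantiles(latencies, n=20)[-1]:.3f}s")
```

The dedicated tools above add what this sketch omits: configurable request-rate sweeps, token-level throughput and time-to-first-token metrics, and dataset-driven prompts.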