GoogleCloudPlatform / slurm-gcp
☆29 · Updated this week
Related projects
Alternatives and complementary repositories for slurm-gcp
- Cluster Toolkit is open-source software offered by Google Cloud that makes it easy for customers to deploy AI/ML and HPC environments…☆212 · Updated this week
- xpk (Accelerated Processing Kit, pronounced x-p-k) is a software tool to help Cloud developers orchestrate training jobs on accelerat…☆81 · Updated this week
- GPU Environment Management for Visual Studio Code☆35 · Updated last year
- An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment.☆57 · Updated this week
- Deploy a Flux MiniCluster to Kubernetes with the operator☆31 · Updated 2 weeks ago
- Singularity Image Format (SIF) reference implementation.☆17 · Updated last week
- Testing if I can implement slurm in an operator☆11 · Updated 2 weeks ago
- A do-framework project to simplify deployment of Kubeflow on Amazon EKS☆18 · Updated 7 months ago
- Holodeck is a project to create test environments optimised for GPU projects.☆9 · Updated this week
- Slurm on Google Cloud Platform☆181 · Updated 2 months ago
- A simplified and automated orchestration workflow to perform ML end-to-end (E2E) model tests and benchmarking on Cloud VMs across differe…☆27 · Updated this week
- Azure CycleCloud project to enable users to create, configure, and use Slurm HPC clusters.☆59 · Updated 2 weeks ago
- A collection of YAML files, Helm Charts, Operator code, and guides to act as an example reference implementation for NVIDIA NIM deploymen…☆139 · Updated this week
- Documentation repository for NVIDIA Cloud Native Technologies☆17 · Updated this week
- Packer and CodeBuild/Pipeline files for building EFA/NCCL base AMIs, plus base Docker build files to enable EFA/NCCL in containers☆41 · Updated last year
- PyTorch/XLA integration with JetStream (https://github.com/google/JetStream) for LLM inference☆40 · Updated last week
- Real-time visualisation☆15 · Updated 4 months ago
- Deploy your HPC Cluster on AWS in 20 minutes with just 1 click.☆63 · Updated 9 months ago
- A stand-alone implementation of several NumPy dtype extensions used in machine learning.☆215 · Updated this week
- Triton Server Component for lightning.ai☆14 · Updated last year
- Kubernetes Operator, Ansible playbooks, and production scripts for large-scale AIStore deployments on Kubernetes.☆78 · Updated this week
- Runner in charge of collecting metrics from LLM inference endpoints for the Unify Hub☆17 · Updated 9 months ago
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel…☆235 · Updated this week
- First token cutoff sampling inference example☆28 · Updated 10 months ago
- This repository hosts code that supports the testing infrastructure for the PyTorch organization. For example, this repo hosts the logic …☆83 · Updated this week
- A collection of useful Go libraries to ease the development of NVIDIA Operators for GPU/NIC management.☆18 · Updated last week
- A dummy's guide to setting up (and using) HPC clusters on Ubuntu 22.04 LTS using Slurm and Munge. Created by the Quant Club @ UIowa.☆219 · Updated 7 months ago