NascentCore / 3k
The 3k platform is for training LLMs.
☆14 · Updated last month
Alternatives and similar repositories for 3k
Users interested in 3k are comparing it to the libraries listed below.
- Ubuntu kernels which are optimized for NVIDIA server systems ☆39 · Updated this week
- Intelligent platform for AI workloads ☆37 · Updated 2 years ago
- InfiniBand SR-IOV CNI ☆13 · Updated this week
- Prometheus exporter for an InfiniBand fabric ☆61 · Updated last year
- Fast and efficient attention method exploration and implementation. ☆21 · Updated 3 months ago
- NVIDIA NCCL Tests for Distributed Training ☆97 · Updated last week
- ☆29 · Updated 4 months ago
- ☆15 · Updated 2 weeks ago
- An HPC and Cloud Computing Fused Job Scheduling System ☆104 · Updated this week
- A toolkit for evaluating the performance of high-performance computing systems ☆20 · Updated last year
- Hands-on with the Hygon DCU, a Chinese domestically produced accelerator card (LLM training, fine-tuning, inference, etc.) ☆29 · Updated this week
- A diverse, simple, and secure all-in-one LLMOps platform ☆105 · Updated 9 months ago
- Metastack: an enhanced and performance-optimized version of Slurm ☆52 · Updated last week
- ☆69 · Updated last week
- ☆32 · Updated 4 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆84 · Updated this week
- ☆52 · Updated 9 months ago
- Hooks CUDA-related dynamic libraries using automated code-generation tools. ☆158 · Updated last year
- Golang bindings for NVIDIA Datacenter GPU Manager (DCGM) ☆118 · Updated 2 months ago
- The NVIDIA Driver Manager is a Kubernetes component which assists in seamless upgrades of the NVIDIA driver on each node of the cluster. ☆35 · Updated last week
- Magnum IO community repo ☆95 · Updated last month
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆246 · Updated 2 weeks ago
- This repository provides installation scripts and configuration files for deploying the CSGHub instance, including Helm charts and Docker… ☆16 · Updated this week
- ☆66 · Updated 5 months ago
- Device plugin for Volcano vGPU which supports hard resource isolation ☆91 · Updated last week
- llm-inference is a platform for publishing and managing LLM inference, providing a wide range of out-of-the-box features for model deploy… ☆82 · Updated last year
- Inference deployment of Llama 3 ☆11 · Updated last year
- Intel® SHMEM - device-initiated communication library based on shared memory ☆24 · Updated 2 weeks ago
- Distributed KV cache coordinator ☆36 · Updated this week
- ☆21 · Updated 5 months ago