NVIDIA-Merlin / HierarchicalKV
HierarchicalKV is part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. Its key capability is storing key-value feature embeddings in GPU high-bandwidth memory (HBM) and in host memory. It can also be used as a generic key-value store.
☆140 · Updated 3 weeks ago
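For context, here is a minimal insert-and-lookup sketch following HierarchicalKV's documented usage pattern; the exact type names, option fields, and method signatures (`nv::merlin::HashTable`, `HashTableOptions`, `insert_or_assign`, `find`, `max_hbm_for_vectors`) are taken from the project's examples and may differ between library versions:

```cpp
// Hypothetical sketch based on HierarchicalKV's documented usage;
// names and signatures may vary across versions.
#include <cstdint>
#include <cuda_runtime.h>
#include "merlin_hashtable.cuh"

using K = uint64_t;  // feature key
using V = float;     // embedding element
using S = uint64_t;  // score used for eviction ordering

int main() {
  nv::merlin::HashTableOptions options;
  options.init_capacity = 1 << 20;                  // 1M keys
  options.max_capacity  = 1 << 20;
  options.dim           = 64;                       // embedding dimension
  options.max_hbm_for_vectors = nv::merlin::GB(1);  // vectors beyond this spill to host memory

  nv::merlin::HashTable<K, V, S> table;
  table.init(options);

  const size_t n = 1024;
  K* d_keys;
  V* d_vectors;
  bool* d_found;
  cudaMalloc(&d_keys, n * sizeof(K));
  cudaMalloc(&d_vectors, n * options.dim * sizeof(V));
  cudaMalloc(&d_found, n * sizeof(bool));
  // ... fill d_keys / d_vectors on the device ...

  cudaStream_t stream;
  cudaStreamCreate(&stream);

  // Upsert n embeddings; passing nullptr for scores lets the
  // table manage them (assumption based on the documented examples).
  table.insert_or_assign(n, d_keys, d_vectors, /*scores=*/nullptr, stream);

  // Lookup: d_found[i] reports whether d_keys[i] was present.
  table.find(n, d_keys, d_vectors, d_found, /*scores=*/nullptr, stream);
  cudaStreamSynchronize(stream);

  cudaStreamDestroy(stream);
  cudaFree(d_keys);
  cudaFree(d_vectors);
  cudaFree(d_found);
  return 0;
}
```

The hierarchical aspect is in `max_hbm_for_vectors`: embedding vectors are kept in HBM up to that budget, with the remainder stored in host memory, while lookups keep a single device-side API.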
Alternatives and similar repositories for HierarchicalKV:
Users interested in HierarchicalKV are comparing it to the libraries listed below.
- A high-performance framework for training wide-and-deep recommender systems on heterogeneous clusters ☆157 · Updated 11 months ago
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆92 · Updated last year
- PyTorch distributed training acceleration framework ☆44 · Updated last month
- NCCL Fast Socket is a transport layer plugin to improve NCCL collective communication performance on Google Cloud. ☆116 · Updated last year
- Paella: Low-latency Model Serving with Virtualized GPU Scheduling ☆58 · Updated 10 months ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆311 · Updated this week
- gossip: Efficient Communication Primitives for Multi-GPU Systems ☆58 · Updated 2 years ago
- PET: Optimizing Tensor Programs with Partially Equivalent Transformations and Automated Corrections ☆119 · Updated 2 years ago
- We invite you to visit and follow our new repository at https://github.com/microsoft/TileFusion. TiledCUDA is a highly efficient kernel … ☆178 · Updated last month
- A home for the final text of all TVM RFCs. ☆103 · Updated 5 months ago
- An Efficient Pipelined Data Parallel Approach for Training Large Models ☆74 · Updated 4 years ago
- NCCL Profiling Kit ☆127 · Updated 8 months ago
- Microsoft Collective Communication Library ☆343 · Updated last year
- distributed-embeddings is a library for building large embedding-based models in TensorFlow 2. ☆43 · Updated last year
- Shared Middle-Layer for Triton Compilation ☆232 · Updated last week
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆317 · Updated this week
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆78 · Updated 4 months ago
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆143 · Updated 2 years ago