NVIDIA-Merlin / distributed-embeddings
distributed-embeddings is a library for building large embedding-based models in TensorFlow 2.
☆43 · Updated last year
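To illustrate the general idea behind such libraries (this is not the distributed-embeddings API itself), here is a minimal pure-Python sketch of model-parallel embedding sharding, in which each device owns a slice of a large table and serves lookups only for the rows it holds. All function names and the round-robin sharding scheme are illustrative assumptions.

```python
# Illustrative sketch only -- not the distributed-embeddings API.
# Model-parallel embedding: the table is split across devices by row,
# and each lookup is routed to the device that owns that row.

def shard_table(table, num_devices):
    """Split embedding rows round-robin across devices.

    Returns one dict per device, mapping row id -> embedding vector.
    """
    shards = [{} for _ in range(num_devices)]
    for row_id, vector in enumerate(table):
        shards[row_id % num_devices][row_id] = vector
    return shards

def distributed_lookup(shards, ids):
    """Route each id to the shard that owns it and gather the vectors.

    In a real system this routing is an all-to-all exchange between
    GPUs; here it is a simple local dictionary lookup.
    """
    num_devices = len(shards)
    return [shards[i % num_devices][i] for i in ids]

# Tiny example: an 8-row table of 4-dim vectors, sharded over 2 "devices".
table = [[float(i)] * 4 for i in range(8)]
shards = shard_table(table, num_devices=2)
out = distributed_lookup(shards, [0, 3, 7])
```

The point of the sketch is that no single device ever materializes the full table, which is what makes multi-GPU training of very large embedding tables feasible.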
Alternatives and similar repositories for distributed-embeddings:
Users interested in distributed-embeddings are comparing it to the libraries listed below.
- HierarchicalKV is part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… ☆137 · Updated 3 weeks ago
- Python bindings for NVTX ☆66 · Updated last year
- http://vlsiarch.eecs.harvard.edu/research/recommendation/ ☆132 · Updated 2 years ago
- PArametrized Recommendation and AI Model benchmark is a repository for development of numerous uBenchmarks as well as end-to-end nets for… ☆128 · Updated last week
- WholeGraph: large-scale Graph Neural Networks ☆101 · Updated 2 months ago
- NCCL Fast Socket is a transport-layer plugin to improve NCCL collective communication performance on Google Cloud. ☆114 · Updated last year
- Home for the OctoML PyTorch Profiler ☆107 · Updated last year
- Synthesizer for optimal collective communication algorithms ☆102 · Updated 9 months ago
- A tensor-aware point-to-point communication primitive for machine learning ☆252 · Updated 2 years ago
- Set of datasets for the deep learning recommendation model (DLRM) ☆41 · Updated 2 years ago
- gossip: Efficient Communication Primitives for Multi-GPU Systems ☆58 · Updated 2 years ago
- ☆44 · Updated last year
- Repository for SysML19 Artifacts Evaluation ☆53 · Updated 5 years ago
- This is the release repository of superneurons ☆52 · Updated 3 years ago
- ☆51 · Updated last year
- ☆141 · Updated this week
- Fine-grained GPU sharing primitives ☆140 · Updated 4 years ago
- oneCCL Bindings for PyTorch* ☆87 · Updated 3 weeks ago
- Enhanced networking support for TensorFlow. Maintained by SIG-networking. ☆98 · Updated 3 years ago
- FTPipe and related pipeline model parallelism research ☆41 · Updated last year
- A high-performance framework for training wide-and-deep recommender systems on heterogeneous clusters ☆158 · Updated 9 months ago
- An Efficient Pipelined Data Parallel Approach for Training Large Models ☆73 · Updated 4 years ago
- Convert nvprof profiles into about:tracing-compatible JSON files ☆68 · Updated 3 years ago
- System for automated integration of deep learning backends ☆48 · Updated 2 years ago
- Microsoft Collective Communication Library ☆61 · Updated 2 months ago
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆90 · Updated last year
- A Tensor Train-based library for compressing sparse embedding tables used in large-scale machine learning models such as … ☆193 · Updated 2 years ago
- ☆73 · Updated 3 years ago
- An analytical performance modeling tool for deep neural networks ☆88 · Updated 4 years ago
- ☆73 · Updated 2 years ago