PersiaML / PERSIA
High performance distributed framework for training deep learning recommendation models based on PyTorch.
☆411 Updated 7 months ago
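For context, the models these frameworks target are dominated by large sparse embedding tables feeding a comparatively small dense network. The sketch below is a minimal, hypothetical CTR-style model in plain PyTorch to illustrate that structure; it is not the PERSIA API, and all names and dimensions are illustrative.

```python
# Hypothetical, minimal sketch of the kind of model these frameworks scale:
# large sparse embedding tables plus a dense MLP (plain PyTorch, not PERSIA's API).
import torch
import torch.nn as nn

class SimpleCTRModel(nn.Module):
    def __init__(self, num_ids_per_field, embedding_dim=16, dense_features=13):
        super().__init__()
        # One embedding table per categorical field (the "sparse" part).
        self.embeddings = nn.ModuleList(
            [nn.Embedding(num_ids, embedding_dim) for num_ids in num_ids_per_field]
        )
        in_dim = dense_features + embedding_dim * len(num_ids_per_field)
        # The "dense" part, typically placed on GPU.
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, dense, sparse_ids):
        # sparse_ids: LongTensor of shape (batch, num_fields)
        emb = [table(sparse_ids[:, i]) for i, table in enumerate(self.embeddings)]
        x = torch.cat([dense] + emb, dim=1)
        return torch.sigmoid(self.mlp(x)).squeeze(1)

# Usage: click-probability prediction on a toy batch.
model = SimpleCTRModel(num_ids_per_field=[1000, 500, 200])
dense = torch.randn(8, 13)
sparse_ids = torch.stack([
    torch.randint(0, 1000, (8,)),
    torch.randint(0, 500, (8,)),
    torch.randint(0, 200, (8,)),
], dim=1)
probs = model(dense, sparse_ids)  # shape (8,)
```

At scale, the embedding tables in such a model no longer fit on a single device, which is the problem PERSIA and the libraries below address in different ways.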
Alternatives and similar repositories for PERSIA
Users interested in PERSIA are comparing it to the libraries listed below
- HugeCTR is a high-efficiency GPU framework designed for Click-Through-Rate (CTR) estimation training ☆1,041 Updated 4 months ago
- A high-performance framework for training wide-and-deep recommender systems on heterogeneous clusters ☆160 Updated last year
- ☆56 Updated 2 years ago
- Bagua Speeds up PyTorch ☆884 Updated last year
- Large batch training of CTR models based on DeepCTR with CowClip (a simplified clipping sketch appears after this list). ☆172 Updated 2 years ago
- HierarchicalKV is a part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… ☆189 Updated 2 months ago
- ☆219 Updated 2 years ago
- This is a Tensor Train based compression library to compress sparse embedding tables used in large-scale machine learning models such as … ☆194 Updated 3 years ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆271 Updated 2 years ago
- Simple Distributed Deep Learning on TensorFlow ☆134 Updated 7 months ago
- Running BERT without Padding ☆476 Updated 3 years ago
- DeepRec is a high-performance recommendation deep learning framework based on TensorFlow. It is hosted in incubation in LF AI & Data Foun… ☆1,162 Updated last year
- distributed-embeddings is a library for building large embedding-based models in TensorFlow 2. ☆46 Updated 2 years ago
- PyTorch On Angel, arming PyTorch with a powerful Parameter Server, which enables PyTorch to train very large models. ☆169 Updated 3 months ago
- NVTabular is a feature engineering and preprocessing library for tabular data designed to quickly and easily manipulate terabyte scale da… ☆1,135 Updated 3 months ago
- Examples for Recommenders - easy to train and deploy on accelerated infrastructure. ☆211 Updated last week
- ☆391 Updated 3 years ago
- A TensorFlow-based distributed training framework optimized for large-scale sparse data. ☆333 Updated last month
- Fast and Adaptive Distributed Machine Learning for TensorFlow, PyTorch and MindSpore. ☆296 Updated last year
- Deep Learning Framework Performance Profiling Toolkit ☆294 Updated 3 years ago
- deepx_core is a foundational library focused on tensor computation and deep learning ☆380 Updated 9 months ago
- A tensor-aware point-to-point communication primitive for machine learning ☆283 Updated last month
- A flexible, high-performance framework for large-scale retrieval problems based on TensorFlow. ☆170 Updated last year
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. ☆332 Updated last month
- A memory-efficient DLRM training solution using ColossalAI ☆105 Updated 3 years ago
- ☆600 Updated 7 years ago
- Resource-adaptive cluster scheduler for deep learning training. ☆451 Updated 2 years ago
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆99 Updated 2 years ago
- http://vlsiarch.eecs.harvard.edu/research/recommendation/ ☆134 Updated 3 years ago
- PyTorch Library for Low-Latency, High-Throughput Graph Learning on GPUs. ☆301 Updated 2 years ago
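The CowClip entry above concerns large-batch CTR training via column-wise clipping of embedding gradients. Below is a hedged, simplified sketch of that clipping idea applied to a plain nn.Embedding; the function name, the constants r and zeta, and the exact threshold rule are illustrative assumptions and may differ from the repository's actual implementation.

```python
# Hedged sketch of per-row (per-ID) embedding-gradient clipping in the spirit of
# CowClip: each row's gradient norm is capped at a threshold proportional to that
# row's parameter norm, floored at a small constant. Assumed formula; see the repo.
import torch
import torch.nn as nn

def cowclip_style_clip_(embedding: nn.Embedding, r: float = 1.0, zeta: float = 1e-3):
    """Clip each row of embedding.weight.grad in place."""
    grad = embedding.weight.grad
    if grad is None:
        return
    if grad.is_sparse:
        # Simplification for the sketch; a real implementation would keep it sparse.
        grad = grad.coalesce().to_dense()
        embedding.weight.grad = grad
    with torch.no_grad():
        weight_norm = embedding.weight.norm(dim=1, keepdim=True)       # (num_ids, 1)
        grad_norm = grad.norm(dim=1, keepdim=True).clamp_min(1e-12)    # (num_ids, 1)
        threshold = (r * weight_norm).clamp_min(zeta)                  # per-row cap
        scale = (threshold / grad_norm).clamp_max(1.0)                 # shrink only
        grad.mul_(scale)

# Usage: call between backward() and optimizer.step().
emb = nn.Embedding(1000, 16)
ids = torch.randint(0, 1000, (32,))
loss = emb(ids).sum()
loss.backward()
cowclip_style_clip_(emb, r=1.0, zeta=1e-3)
```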