PersiaML / PERSIA
A high-performance distributed framework for training deep learning recommendation models, based on PyTorch.
☆407 · Updated this week
Alternatives and similar repositories for PERSIA
Users interested in PERSIA often compare it to the libraries listed below.
- HugeCTR is a high-efficiency GPU framework designed for Click-Through-Rate (CTR) estimation training ☆1,004 · Updated 2 months ago
- A high-performance framework for training wide-and-deep recommender systems on heterogeneous clusters ☆158 · Updated last year
- Bagua Speeds up PyTorch ☆883 · Updated 10 months ago
- HierarchicalKV is a part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… ☆146 · Updated last week
- Running BERT without Padding ☆471 · Updated 3 years ago
- This is a Tensor Train based compression library to compress sparse embedding tables used in large-scale machine learning models such as… ☆194 · Updated 2 years ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆267 · Updated 2 years ago
- ☆53 · Updated last year
- Large-batch training of CTR models based on DeepCTR with CowClip. ☆169 · Updated 2 years ago
- ☆217 · Updated last year
- NVTabular is a feature engineering and preprocessing library for tabular data designed to quickly and easily manipulate terabyte-scale da… ☆1,084 · Updated 8 months ago
- ☆585 · Updated 7 years ago
- Simple Distributed Deep Learning on TensorFlow ☆134 · Updated 2 years ago
- A tensor-aware point-to-point communication primitive for machine learning ☆257 · Updated 2 years ago
- DeepRec is a high-performance recommendation deep learning framework based on TensorFlow. It is hosted in incubation in LF AI & Data Foun… ☆1,101 · Updated 4 months ago
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆94 · Updated 2 years ago
- distributed-embeddings is a library for building large embedding-based models in TensorFlow 2. ☆44 · Updated last year
- ☆390 · Updated 2 years ago
- Dive into Deep Learning Compiler ☆645 · Updated 2 years ago
- PyTorch On Angel, arming PyTorch with a powerful Parameter Server, which enables PyTorch to train very big models. ☆168 · Updated 2 years ago
- A lightweight parameter server interface ☆76 · Updated 2 years ago
- DeepLearning Framework Performance Profiling Toolkit ☆284 · Updated 3 years ago
- A TensorFlow-based distributed training framework optimized for large-scale sparse data. ☆327 · Updated this week
- Resource-adaptive cluster scheduler for deep learning training. ☆442 · Updated 2 years ago
- Fast and Adaptive Distributed Machine Learning for TensorFlow, PyTorch and MindSpore. ☆294 · Updated last year
- PyTorch Library for Low-Latency, High-Throughput Graph Learning on GPUs. ☆300 · Updated last year
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆157 · Updated 5 months ago
- WholeGraph - large-scale Graph Neural Networks ☆105 · Updated 6 months ago
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆986 · Updated 8 months ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. ☆307 · Updated last month