facebookresearch / dlrm_datasets
Set of datasets for the deep learning recommendation model (DLRM).
☆47 · Updated 2 years ago
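The repository publishes embedding-lookup traces for DLRM-style models. As a minimal sketch of how one might inspect such a trace (assuming the trace is serialized with PyTorch as a tuple of indices/offsets/lengths tensors; the file name below is hypothetical, not taken from the repository):

```python
# Minimal sketch for inspecting a downloaded DLRM embedding-lookup trace.
# Assumptions: the trace is a PyTorch-serialized file holding a tuple of
# (indices, offsets, lengths) tensors; the path below is hypothetical.
import torch

trace_path = "embedding_bag/example_trace.pt"  # hypothetical file name
indices, offsets, lengths = torch.load(trace_path)

print("total lookups:", indices.numel())
print("offsets shape:", tuple(offsets.shape))
print("lengths:", None if lengths is None else tuple(lengths.shape))
```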
Alternatives and similar repositories for dlrm_datasets
Users who are interested in dlrm_datasets are comparing it to the repositories listed below:
- Accelerating Recommender model training by leveraging popular choices -- VLDB 2022 · ☆31 · Updated last year
- Graphiler is a compiler stack built on top of DGL and TorchScript which compiles GNNs defined using user-defined functions (UDFs) into ef… · ☆59 · Updated 3 years ago
- Artifact for OSDI'23: MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Mult… · ☆40 · Updated last year
- http://vlsiarch.eecs.harvard.edu/research/recommendation/ · ☆135 · Updated 3 years ago
- ☆112 · Updated 4 years ago
- ☆70 · Updated 4 years ago
- Artifact for PPoPP22 QGTC: Accelerating Quantized GNN via GPU Tensor Core. · ☆30 · Updated 3 years ago
- Artifact for PPoPP20 "Understanding and Bridging the Gaps in Current GNN Performance Optimizations" · ☆40 · Updated 4 years ago
- ☆41 · Updated 5 years ago
- Distributed Multi-GPU GNN Framework · ☆36 · Updated 5 years ago
- ☆83 · Updated 3 years ago
- Artifact for USENIX ATC'23: TC-GNN: Bridging Sparse GNN Computation and Dense Tensor Cores on GPUs. · ☆51 · Updated 2 years ago
- AlpaServe: Statistical Multiplexing with Model Parallelism for Deep Learning Serving (OSDI 23) · ☆91 · Updated 2 years ago
- Dorylus: Affordable, Scalable, and Accurate GNN Training · ☆76 · Updated 4 years ago
- SoCC'20 and TPDS'21: Scaling GNN Training on Large Graphs via Computation-aware Caching and Partitioning. · ☆51 · Updated 2 years ago
- [ICLR 2022] "PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication" by Cheng Wan, Y… · ☆33 · Updated 2 years ago
- Artifact evaluation of the paper "Accelerating Training and Inference of Graph Neural Networks with Fast Sampling and Pipelining" · ☆23 · Updated 3 years ago
- A schedule language for large model training · ☆151 · Updated 3 months ago
- Artifact for OSDI'21 GNNAdvisor: An Adaptive and Efficient Runtime System for GNN Acceleration on GPUs. · ☆68 · Updated 2 years ago
- ☆77 · Updated 4 years ago
- Magicube is a high-performance library for quantized sparse matrix operations (SpMM and SDDMM) of deep learning on Tensor Cores. · ☆90 · Updated 3 years ago
- Synthesizer for optimal collective communication algorithms · ☆121 · Updated last year
- Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture (accepted by PVLDB) · ☆44 · Updated 2 years ago
- ☆31 · Updated last year
- [MLSys 2022] "BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node … · ☆56 · Updated 2 years ago
- ☆159 · Updated last year
- SparseTIR: Sparse Tensor Compiler for Deep Learning · ☆141 · Updated 2 years ago
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models. · ☆68 · Updated 8 months ago
- ☆23 · Updated 3 months ago
- Ginex: SSD-enabled Billion-scale Graph Neural Network Training on a Single Machine via Provably Optimal In-memory Caching · ☆42 · Updated last year