NVIDIA / recsys-examples
Examples for Recommenders - easy to train and deploy on accelerated infrastructure.
☆190 · Updated 2 weeks ago
Alternatives and similar repositories for recsys-examples
Users interested in recsys-examples are comparing it to the libraries listed below.
- A high-performance framework for training wide-and-deep recommender systems on heterogeneous clusters ☆159 · Updated last year
- A unified-architecture deep learning framework designed specifically for ultra-large-scale sparse models. ☆277 · Updated last month
- HierarchicalKV is part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… (a conceptual two-tier storage sketch follows this list) ☆185 · Updated last month
- HugeCTR is a high-efficiency GPU framework designed for Click-Through-Rate (CTR) estimation training ☆1,040 · Updated 3 months ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆271 · Updated 2 years ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. ☆330 · Updated last week
- Notes on reading the TensorFlow source code ☆193 · Updated 7 years ago
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS (a minimal online-softmax sketch follows this list) ☆462 · Updated 7 months ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆476 · Updated last year
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆99 · Updated 2 years ago
- The road to hack SysML and become a systems expert ☆503 · Updated last year
- FlagGems is an operator library for large language models implemented in the Triton Language. ☆803 · Updated last week
- GLake: optimizing GPU memory management and IO transmission. ☆491 · Updated 9 months ago
- Zero Bubble Pipeline Parallelism ☆442 · Updated 7 months ago
- Distributed Compiler based on Triton for Parallel Systems ☆1,289 · Updated last week
- An easy-to-use framework for large-scale recommendation algorithms. ☆287 · Updated last week
- LLM training technologies developed by Kwai ☆67 · Updated last month
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆614 · Updated last week
- Disaggregated serving system for Large Language Models (LLMs). ☆754 · Updated 8 months ago
- A flexible, high-performance framework for large-scale retrieval problems based on TensorFlow. ☆169 · Updated 11 months ago
- PyTorch distributed training acceleration framework ☆54 · Updated 4 months ago
- Materials for learning SGLang ☆693 · Updated last week
- Running BERT without Padding (a small sequence-packing sketch follows this list) ☆476 · Updated 3 years ago
- High-performance distributed framework for training deep learning recommendation models based on PyTorch. ☆411 · Updated 6 months ago
- ☆332 · Updated this week
- ☆219 · Updated 2 years ago
- Examples of CUDA implementations using CUTLASS CuTe ☆263 · Updated 5 months ago
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆910 · Updated 11 months ago
- ☆56 · Updated 2 years ago
- ☆518 · Updated last month
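
Several entries above name concrete techniques; three short sketches follow. First, for HierarchicalKV's hierarchical key-value storage: a minimal conceptual sketch of a two-tier embedding store that keeps hot rows on the GPU and demotes cold rows to host memory. All class and method names here are hypothetical illustrations, not the HierarchicalKV API, and the eviction policy is deliberately naive.

```python
# Conceptual two-tier embedding store: hot entries on GPU (HBM), cold entries
# in host memory. Hypothetical names; this is NOT the HierarchicalKV API.
import torch

class TwoTierEmbeddingStore:
    def __init__(self, dim: int, hbm_capacity: int):
        self.dim = dim
        self.hbm_capacity = hbm_capacity  # max rows kept on the GPU tier
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        self.hbm = {}   # hot tier: key -> embedding on self.device
        self.host = {}  # cold tier: key -> embedding on CPU

    def lookup(self, key: int) -> torch.Tensor:
        if key in self.hbm:                        # hot-tier hit
            return self.hbm[key]
        if key in self.host:                       # cold-tier hit: promote
            emb = self.host.pop(key).to(self.device)
        else:                                      # miss: initialize a new row
            emb = torch.randn(self.dim, device=self.device)
        self._insert_hot(key, emb)
        return emb

    def _insert_hot(self, key: int, emb: torch.Tensor) -> None:
        if len(self.hbm) >= self.hbm_capacity:
            # Naive eviction: demote the most recently inserted entry.
            # Real systems use scored policies (LRU/LFU) and batched copies.
            victim, victim_emb = self.hbm.popitem()
            self.host[victim] = victim_emb.cpu()
        self.hbm[key] = emb

store = TwoTierEmbeddingStore(dim=8, hbm_capacity=2)
for key in (1, 2, 3, 2):  # 3 demotes 2 to host; the last lookup promotes 2 back
    _ = store.lookup(key)
```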
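
Second, for the flash attention tutorial entry: the core trick is tiling over key/value blocks with an online softmax, so the full seq_len x seq_len score matrix is never materialized. Below is a plain-PyTorch sketch of that recurrence, written for clarity rather than speed; the function name, block size, and shapes are illustrative assumptions, not code from the tutorial.

```python
# Tiled attention with an online softmax (the flash-attention recurrence),
# in plain PyTorch for readability. Illustrative sketch, not the tutorial's code.
import torch

def tiled_attention(q, k, v, block: int = 64):
    # q, k, v: (seq_len, head_dim). K/V are processed block by block, so only
    # a (seq_len, block) score tile exists at any time.
    scale = q.shape[-1] ** -0.5
    row_max = torch.full((q.shape[0],), float("-inf"))  # running row-wise max
    denom = torch.zeros(q.shape[0])                     # running softmax denominator
    acc = torch.zeros_like(q)                           # running weighted sum of V
    for start in range(0, k.shape[0], block):
        kb, vb = k[start:start + block], v[start:start + block]
        scores = (q @ kb.T) * scale
        new_max = torch.maximum(row_max, scores.max(dim=-1).values)
        probs = torch.exp(scores - new_max[:, None])    # tile's softmax numerator
        rescale = torch.exp(row_max - new_max)          # correct earlier tiles
        denom = denom * rescale + probs.sum(dim=-1)
        acc = acc * rescale[:, None] + probs @ vb
        row_max = new_max
    return acc / denom[:, None]

q, k, v = (torch.randn(128, 32) for _ in range(3))
out = tiled_attention(q, k, v)
ref = torch.softmax((q @ k.T) * 32 ** -0.5, dim=-1) @ v
assert torch.allclose(out, ref, atol=1e-5)  # matches materialized attention
```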
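
Third, for the "Running BERT without Padding" entry: the underlying idea is to pack variable-length sequences into one contiguous token stream plus cumulative offsets (often called cu_seqlens), so no compute or memory is spent on pad tokens. This is a minimal packing/unpacking sketch under assumed shapes; the helper names are hypothetical.

```python
# Pack variable-length sequences into one token stream with cumulative
# offsets, then restore them. Hypothetical helper names, for illustration.
import torch

def pack_sequences(seqs):
    # seqs: list of (len_i, hidden) tensors -> (total_tokens, hidden) + offsets
    lengths = torch.tensor([s.shape[0] for s in seqs])
    cu_seqlens = torch.cat([torch.zeros(1, dtype=torch.long),
                            lengths.cumsum(0)])
    return torch.cat(seqs, dim=0), cu_seqlens

def unpack_sequences(packed, cu_seqlens):
    return [packed[cu_seqlens[i]:cu_seqlens[i + 1]]
            for i in range(len(cu_seqlens) - 1)]

seqs = [torch.randn(n, 16) for n in (3, 7, 5)]
packed, cu = pack_sequences(seqs)        # shape (15, 16), cu = [0, 3, 10, 15]
restored = unpack_sequences(packed, cu)
assert all(torch.equal(a, b) for a, b in zip(seqs, restored))
```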