NVIDIA / recsys-examples
Examples for Recommenders - easy to train and deploy on accelerated infrastructure.
☆51 · Updated last week
Alternatives and similar repositories for recsys-examples
Users interested in recsys-examples are comparing it to the libraries listed below.
- HierarchicalKV is part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… ☆149 · Updated this week (a toy two-tier cache sketch follows this list)
- Zero Bubble Pipeline Parallelism ☆398 · Updated last month
- PyTorch bindings for CUTLASS grouped GEMM. ☆127 · Updated 5 months ago (reference semantics are sketched after this list)
- ☆96 · Updated 9 months ago
- ☆212 · Updated 11 months ago
- A high-performance framework for training wide-and-deep recommender systems on heterogeneous clusters ☆158 · Updated last year
- ☆139 · Updated last year
- ☆148 · Updated 5 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆55 · Updated 10 months ago
- ☆127 · Updated 5 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios. ☆38 · Updated 3 months ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆272 · Updated last year
- Yinghan's Code Sample ☆330 · Updated 2 years ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆474 · Updated last year
- ☆135 · Updated last year
- Examples of CUDA implementations using CUTLASS CuTe ☆195 · Updated 4 months ago
- ☆91 · Updated 5 months ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆267 · Updated 2 years ago
- Running BERT without Padding ☆471 · Updated 3 years ago
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- An industrial extension library for PyTorch to accelerate large-scale model training ☆37 · Updated last month
- Transformer-related optimization, including BERT and GPT ☆59 · Updated last year
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆377 · Updated last month (an online-softmax sketch follows this list)
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆519 · Updated 3 weeks ago
- An Easy-to-understand TensorOp Matmul Tutorial ☆364 · Updated 9 months ago
- ☆97 · Updated 2 months ago
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆144 · Updated 2 years ago
- Distributed Compiler Based on Triton for Parallel Systems ☆829 · Updated this week
- ☆115 · Updated last month
- A simple high-performance CUDA GEMM implementation. ☆380 · Updated last year
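
The HierarchicalKV entry above describes a key-value store split across memory tiers. As a rough mental model only, the sketch below shows a bounded hot tier (standing in for GPU memory) that evicts least-recently-used entries to an unbounded cold tier (standing in for host memory). The class name and the LRU policy are illustrative assumptions, not HierarchicalKV's actual API or eviction strategy.

```python
from collections import OrderedDict

class TwoTierKV:
    """Toy two-tier key-value store: a bounded 'hot' tier evicts
    least-recently-used entries to an unbounded 'cold' tier.
    (Hypothetical class; not the HierarchicalKV API.)"""

    def __init__(self, hot_capacity):
        self.hot = OrderedDict()   # recency-ordered hot tier
        self.cold = {}             # overflow tier
        self.hot_capacity = hot_capacity

    def put(self, key, value):
        self.hot[key] = value
        self.hot.move_to_end(key)  # mark as most recently used
        if len(self.hot) > self.hot_capacity:
            old_key, old_val = self.hot.popitem(last=False)  # evict LRU
            self.cold[old_key] = old_val

    def get(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)   # refresh recency
            return self.hot[key]
        if key in self.cold:            # promote back to hot on access
            value = self.cold.pop(key)
            self.put(key, value)
            return value
        raise KeyError(key)

store = TwoTierKV(hot_capacity=2)
for i in range(4):
    store.put(i, f"emb{i}")
print(list(store.hot), list(store.cold))  # [2, 3] [0, 1]
```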
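For the grouped-GEMM bindings entry, the operation itself is easy to pin down even without that repo's API: a grouped GEMM runs many independent matmuls, whose shapes may differ, in a single fused launch. A plain loop of torch.matmul calls gives the reference semantics any such binding must reproduce; the function name here is hypothetical.

```python
import torch

def grouped_gemm_reference(a_list, b_list):
    """Reference semantics of a grouped GEMM: one independent
    matmul per (A_i, B_i) pair; shapes may vary across pairs."""
    return [a @ b for a, b in zip(a_list, b_list)]

# Three problems with different M dimensions, treated as one group.
a_list = [torch.randn(m, 8) for m in (4, 16, 2)]
b_list = [torch.randn(8, 32) for _ in range(3)]
outs = grouped_gemm_reference(a_list, b_list)
print([tuple(o.shape) for o in outs])  # [(4, 32), (16, 32), (2, 32)]
```

A fused kernel pays off when the individual matmuls are too small to saturate the GPU on their own, which is the regime grouped-GEMM libraries target.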
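The flash-attention tutorial and the Triton memory-efficient attention operators listed above both revolve around the same idea: compute softmax(QKᵀ/√d)·V over key/value tiles while carrying a running row maximum and normalizer, so the full n×n score matrix is never materialized. Below is a minimal single-head PyTorch sketch of that online-softmax recurrence; it is illustrative only, and the names and tiling are assumptions, not code from those repos.

```python
import torch

def flash_attention_reference(q, k, v, tile=128):
    """Single-head attention over key/value tiles with an online
    softmax: the running row-max m and normalizer l are rescaled
    per tile, so the (n x n) score matrix is never materialized.
    (Hypothetical name; not code from the repos above.)"""
    n, d = q.shape
    scale = d ** -0.5
    o = torch.zeros_like(q)                # unnormalized output accumulator
    m = torch.full((n, 1), float("-inf"))  # running row maximum
    l = torch.zeros(n, 1)                  # running softmax normalizer
    for s in range(0, k.shape[0], tile):
        kt, vt = k[s:s + tile], v[s:s + tile]
        scores = (q @ kt.T) * scale        # one (n, tile) block of QK^T
        m_new = torch.maximum(m, scores.max(dim=-1, keepdim=True).values)
        p = torch.exp(scores - m_new)      # this tile's unnormalized probs
        alpha = torch.exp(m - m_new)       # rescales previously accumulated state
        l = alpha * l + p.sum(dim=-1, keepdim=True)
        o = alpha * o + p @ vt
        m = m_new
    return o / l

# Two tiles of 128 keys each, checked against the naive computation.
q, k, v = (torch.randn(256, 64) for _ in range(3))
ref = torch.softmax((q @ k.T) * 64 ** -0.5, dim=-1) @ v
assert torch.allclose(flash_attention_reference(q, k, v), ref, atol=1e-4)
```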