FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/
☆1,543 · updated Mar 17, 2026
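For context on the acronym expanded above: GEMM is the BLAS-style operation C ← αAB + βC that libraries like FBGEMM implement with reduced-precision, cache-tuned kernels. A minimal reference sketch in NumPy (the `gemm` helper here is illustrative, not FBGEMM's API):

```python
import numpy as np

def gemm(alpha, A, B, beta, C):
    # General Matrix-Matrix Multiplication: C <- alpha * A @ B + beta * C.
    # Libraries such as FBGEMM specialize this kernel (e.g. int8 inputs,
    # int32 accumulation) for server-side inference.
    return alpha * (A @ B) + beta * C

A = np.arange(6, dtype=np.float32).reshape(2, 3)
B = np.ones((3, 2), dtype=np.float32)
C = np.zeros((2, 2), dtype=np.float32)
print(gemm(1.0, A, B, 0.0, C))  # with beta=0 this is just A @ B
```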
Alternatives and similar repositories for FBGEMM
Users interested in FBGEMM compare it to the libraries listed below.
- CUDA Templates and Python DSLs for High-Performance Linear Algebra · ☆9,442 · updated this week
- Low-precision matrix multiplication · ☆1,832 · updated Jan 29, 2024
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators · ☆1,549 · updated Aug 28, 2019
- A library for accelerating Transformer models on NVIDIA GPUs, including 8-bit and 4-bit floating point (FP8 and FP4) precision on H… · ☆3,211 · updated this week
- Compiler for Neural Network hardware accelerators · ☆3,326 · updated May 11, 2024
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. · ☆1,003 · updated Sep 19, 2024
- Development repository for the Triton language and compiler · ☆18,656 · updated Mar 14, 2026
- oneAPI Deep Neural Network Library (oneDNN) · ☆3,964 · updated this week
- Transformer-related optimization, including BERT, GPT · ☆6,397 · updated Mar 27, 2024
- HugeCTR is a high-efficiency GPU framework designed for Click-Through-Rate (CTR) estimation training · ☆1,052 · updated Mar 12, 2026
- A CPU+GPU profiling library that provides access to timeline traces and hardware performance counters. · ☆932 · updated this week
- PyTorch domain library for recommendation systems · ☆2,490 · updated this week
- Open Machine Learning Compiler Framework · ☆13,197 · updated this week
- FlashInfer: Kernel Library for LLM Serving · ☆5,145 · updated this week
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. · ☆1,077 · updated Apr 17, 2024
- PyTorch extensions for high-performance and large-scale training. · ☆3,403 · updated Apr 26, 2025
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… · ☆4,709 · updated this week
- Library for specialized dense and sparse matrix operations, and deep learning primitives. · ☆945 · updated Feb 14, 2026
- The Torch-MLIR project aims to provide first-class support from the PyTorch ecosystem to the MLIR ecosystem. · ☆1,770 · updated Mar 13, 2026
- Collective communications library with various primitives for multi-machine training. · ☆1,405 · updated Mar 11, 2026
- The Tensor Algebra SuperOptimizer for Deep Learning · ☆740 · updated Jan 26, 2023
- Optimized primitives for collective multi-GPU communication · ☆4,531 · updated this week
- A PyTorch extension: tools for easy mixed-precision and distributed training in PyTorch · ☆8,936 · updated this week
- Tile primitives for speedy kernels · ☆3,232 · updated this week
- High-efficiency floating-point neural network inference operators for mobile, server, and Web · ☆2,276 · updated this week
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. · ☆1,273 · updated Aug 28, 2025
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training · ☆1,864 · updated Mar 12, 2026
- Distributed Compiler based on Triton for Parallel Systems · ☆1,386 · updated Mar 11, 2026
- PyTorch native quantization and sparsity for training and inference · ☆2,730 · updated Mar 14, 2026
- Common in-memory tensor structure · ☆1,177 · updated Jan 26, 2026
- A high-performance and generic framework for distributed DNN training · ☆3,716 · updated Oct 3, 2023
- A tensor-aware point-to-point communication primitive for machine learning · ☆284 · updated Dec 17, 2025
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. · ☆1,041 · updated Sep 4, 2024
- Fast low-bit matmul kernels in Triton · ☆438 · updated Feb 1, 2026
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. · ☆921 · updated Dec 30, 2024
- HierarchicalKV is a part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… · ☆197 · updated Feb 27, 2026
- Enabling PyTorch on XLA Devices (e.g. Google TPU) · ☆2,756 · updated Dec 18, 2025
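A recurring theme in the list above (FBGEMM, gemmlowp, QNNPACK, torchao, the FP16xINT4 kernel) is low-precision matmul: quantize float tensors to narrow integers, multiply with wide accumulation, then rescale. A minimal NumPy sketch of symmetric per-tensor int8 quantization; the function names are illustrative, not any listed library's API:

```python
import numpy as np

def quantize(x, n_bits=8):
    # Symmetric per-tensor quantization: one float scale maps the tensor's
    # max magnitude to the top of the signed integer range (127 for int8).
    scale = np.abs(x).max() / (2 ** (n_bits - 1) - 1)
    q = np.round(x / scale).astype(np.int8)
    return q, scale

def int8_matmul(a, b):
    qa, sa = quantize(a)
    qb, sb = quantize(b)
    # Multiply in int8 but accumulate in int32 to avoid overflow,
    # then dequantize the result with the product of the two scales.
    acc = qa.astype(np.int32) @ qb.astype(np.int32)
    return acc.astype(np.float32) * (sa * sb)

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8)).astype(np.float32)
b = rng.standard_normal((8, 4)).astype(np.float32)
print(np.max(np.abs(int8_matmul(a, b) - a @ b)))  # small quantization error
```

Production kernels add per-channel scales, zero points for asymmetric ranges, and fused requantization, but the quantize/accumulate/rescale structure is the same.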