pytorch / FBGEMM
FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/
☆1,210 · Updated this week
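FBGEMM's focus is low-precision, high-performance GEMM kernels for server-side inference. For reference, below is a minimal, naive row-major GEMM sketch in C++ (C = alpha·A·B + beta·C). It only illustrates the operation that FBGEMM and the libraries listed further down accelerate with blocked, vectorized, reduced-precision kernels; it is not FBGEMM's API.

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Naive row-major GEMM: C = alpha * A * B + beta * C,
// where A is MxK, B is KxN, and C is MxN.
// Libraries such as FBGEMM replace this triple loop with blocked,
// vectorized, low-precision (e.g. int8) kernels tuned per architecture.
void naive_gemm(std::size_t M, std::size_t N, std::size_t K,
                float alpha, const float* A, const float* B,
                float beta, float* C) {
  for (std::size_t i = 0; i < M; ++i) {
    for (std::size_t j = 0; j < N; ++j) {
      float acc = 0.0f;
      for (std::size_t k = 0; k < K; ++k) {
        acc += A[i * K + k] * B[k * N + j];
      }
      C[i * N + j] = alpha * acc + beta * C[i * N + j];
    }
  }
}

int main() {
  // Multiply a 2x3 matrix by a 3x2 matrix.
  std::vector<float> A{1, 2, 3, 4, 5, 6};
  std::vector<float> B{7, 8, 9, 10, 11, 12};
  std::vector<float> C(4, 0.0f);
  naive_gemm(2, 2, 3, 1.0f, A.data(), B.data(), 0.0f, C.data());
  for (float v : C) std::cout << v << ' ';  // prints: 58 64 139 154
  std::cout << '\n';
  return 0;
}
```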
Related projects
Alternatives and complementary repositories for FBGEMM
- The Tensor Algebra SuperOptimizer for Deep Learning · ☆692 · Updated last year
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. · ☆962 · Updated 2 months ago
- Collective communications library with various primitives for multi-machine training. · ☆1,227 · Updated this week
- Common in-memory tensor structure · ☆912 · Updated last month
- A performant and modular runtime for TensorFlow · ☆756 · Updated last month
- The Torch-MLIR project aims to provide first-class support from the PyTorch ecosystem to the MLIR ecosystem. · ☆1,355 · Updated this week
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. · ☆816 · Updated this week
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. · ☆1,011 · Updated 7 months ago
- TVM integration into PyTorch · ☆452 · Updated 4 years ago
- Low-precision matrix multiplication · ☆1,780 · Updated 9 months ago
- Dive into Deep Learning Compiler · ☆643 · Updated 2 years ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… · ☆1,979 · Updated this week
- Running BERT without Padding · ☆460 · Updated 2 years ago
- A GPU performance profiling tool for PyTorch models · ☆495 · Updated 3 years ago
- Backward compatible ML compute opset inspired by HLO/MHLO · ☆412 · Updated this week
- A CPU+GPU profiling library that provides access to timeline traces and hardware performance counters. · ☆734 · Updated this week
- A tensor-aware point-to-point communication primitive for machine learning · ☆249 · Updated last year
- A GPipe implementation in PyTorch · ☆818 · Updated 3 months ago
- Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators · ☆1,528 · Updated 5 years ago
- High-efficiency floating-point neural network inference operators for mobile, server, and Web · ☆1,885 · Updated this week
- Reference implementations of MLPerf™ inference benchmarks · ☆1,238 · Updated this week
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure · ☆767 · Updated this week
- Row-major matmul optimization · ☆591 · Updated last year
- HugeCTR is a high-efficiency GPU framework designed for Click-Through-Rate (CTR) estimation training · ☆946 · Updated last month
- Efficient GPU kernels for block-sparse matrix multiplication and convolution · ☆1,027 · Updated last year
- Library for specialized dense and sparse matrix operations, and deep learning primitives. · ☆850 · Updated this week
- [ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl · ☆1,684 · Updated last year
- FlexFlow Serve: Low-Latency, High-Performance LLM Serving · ☆1,713 · Updated this week