kpu/intgemm
int8_t and int16_t matrix multiplication based on https://arxiv.org/abs/1705.01991
☆73 · Updated last year
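The quantize/multiply/rescale pattern behind intgemm can be illustrated with a minimal pure-Python sketch. This is an assumption-laden illustration of the general int8 GEMM idea from the linked paper, not intgemm's actual API or SIMD kernels; all function names here are hypothetical.

```python
# Minimal pure-Python sketch of the int8 quantized-GEMM idea behind intgemm:
# quantize floats to int8, multiply with wide accumulation, then rescale.
# Illustrative only -- intgemm's real API and vectorized kernels differ.

def quantize(mat, scale):
    """Round scaled floats into the int8 range [-127, 127]."""
    return [[max(-127, min(127, round(x * scale))) for x in row] for row in mat]

def int8_matmul(a_q, b_q):
    """Multiply quantized matrices, accumulating in (unbounded) Python ints,
    standing in for the int32 accumulators a real kernel would use."""
    inner, cols = len(b_q), len(b_q[0])
    return [[sum(row[k] * b_q[k][j] for k in range(inner))
             for j in range(cols)] for row in a_q]

def dequantize(c_q, scale_a, scale_b):
    """Undo both input scales to recover approximate float products."""
    inv = 1.0 / (scale_a * scale_b)
    return [[v * inv for v in row] for row in c_q]

a = [[0.5, -0.25], [1.0, 0.75]]
b = [[0.1, 0.2], [-0.3, 0.4]]
sa = sb = 127.0  # maps |x| <= 1.0 onto the full int8 range
c = dequantize(int8_matmul(quantize(a, sa), quantize(b, sb)), sa, sb)
# c approximates the exact float product [[0.125, 0.0], [-0.125, 0.5]]
```

Accumulating in a wider type before rescaling is the key point: a sum of int8 products overflows int8 almost immediately, which is why such kernels accumulate in int16/int32.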
Alternatives and similar repositories for intgemm
Users interested in intgemm are comparing it to the libraries listed below.
- Fast matrix multiplication for few-bit integer matrices on CPUs. ☆29 · Updated 6 years ago
- Fast stand-alone C++ decoder for RNN-based NMT models ☆26 · Updated 4 years ago
- Customized matrix multiplication kernels ☆56 · Updated 3 years ago
- A GPU language model, based on btree-backed tries. ☆30 · Updated 7 years ago
- ☆312 · Updated 6 months ago
- A library of GPU kernels for sparse matrix operations. ☆265 · Updated 4 years ago
- Clover: Quantized 4-bit Linear Algebra Library ☆114 · Updated 7 years ago
- How to design CPU GEMM on x86 with AVX-256 that can beat OpenBLAS. ☆70 · Updated 6 years ago
- THIS REPOSITORY HAS MOVED TO github.com/nvidia/cub, WHICH IS AUTOMATICALLY MIRRORED HERE. ☆84 · Updated last year
- Fast Neural Machine Translation in C++ - development repository ☆273 · Updated 8 months ago
- Kernel Fusion and Runtime Compilation Based on NNVM ☆70 · Updated 8 years ago
- Conversion to/from half-precision floating-point formats ☆354 · Updated 10 months ago
- [DEPRECATED] Moved to the ROCm/rocm-libraries repo ☆246 · Updated this week
- Codebase associated with the PyTorch compiler tutorial ☆46 · Updated 5 years ago
- MatMul performance benchmarks for a single CPU core, comparing both hand-engineered and codegen kernels. ☆133 · Updated last year
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS. ☆51 · Updated 7 years ago
- C99/C++ header-only library for division via fixed-point multiplication by the inverse ☆52 · Updated last year
- PyProf2: PyTorch profiling tool ☆82 · Updated 5 years ago
- Personal collection of references for high-performance mixed-precision training. ☆41 · Updated 5 years ago
- Fast sparse deep learning on CPUs ☆53 · Updated 2 years ago
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆94 · Updated 6 years ago
- High-speed GEMV kernels with up to 2.7x speedup over the PyTorch baseline. ☆109 · Updated 11 months ago
- Intel® Optimization for Chainer*, a Chainer module providing a numpy-like API and DNN acceleration using MKL-DNN. ☆171 · Updated last month
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆337 · Updated this week
- ☆69 · Updated 2 years ago
- oneCCL Bindings for PyTorch* ☆97 · Updated 2 months ago
- Assembler for NVIDIA Volta and Turing GPUs ☆222 · Updated 3 years ago
- PyTorch RFCs (experimental) ☆132 · Updated last month
- Library for fast image convolution in neural networks on Intel architecture ☆29 · Updated 8 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆131 · Updated 3 years ago