GreenWaves-Technologies / bfloat16
bfloat16 dtype for numpy
☆19 · Updated 2 years ago
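For orientation, here is a minimal sketch of how a NumPy bfloat16 extension dtype like this one is typically used. The `from bfloat16 import bfloat16` import path is an assumption for illustration, not confirmed against this repository; check its README for the actual entry point.

```python
# Minimal usage sketch for a bfloat16 NumPy extension dtype.
# ASSUMPTION: the package exposes a scalar type importable as
# `from bfloat16 import bfloat16`; the real import path may differ.
import numpy as np
from bfloat16 import bfloat16  # assumed import path

# Build an array with the extension dtype; values are rounded to
# bfloat16's 8-bit exponent / 7-bit mantissa format.
x = np.array([0.1, 1.5, 3.14159], dtype=bfloat16)

# Casting back to float32 makes the rounding error visible.
ref = np.array([0.1, 1.5, 3.14159], dtype=np.float32)
print(x)
print(x.astype(np.float32) - ref)
```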
Alternatives and similar repositories for bfloat16
Users interested in bfloat16 are comparing it to the libraries listed below.
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆111 · Updated 10 months ago
- ☆158 · Updated 2 years ago
- QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX ☆161 · Updated this week
- The official, proof-of-concept C++ implementation of PocketNN. ☆35 · Updated 3 weeks ago
- A Data-Centric Compiler for Machine Learning ☆85 · Updated last year
- Sandbox for TVM and playing around! ☆22 · Updated 2 years ago
- Converting a deep neural network to integer-only inference in native C via uniform quantization and fixed-point representation. ☆25 · Updated 3 years ago
- ☆11 · Updated 4 years ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆45 · Updated 2 months ago
- Fast sparse deep learning on CPUs ☆56 · Updated 3 years ago
- ☆162 · Updated 2 years ago
- ☆50 · Updated last year
- ☆72 · Updated 6 months ago
- Trying to find the minimal model that can achieve 99% accuracy on the MNIST dataset ☆27 · Updated 7 years ago
- Test suite for probing the numerical behavior of NVIDIA tensor cores ☆41 · Updated last year
- Benchmarking different models on PyTorch 2.0 ☆20 · Updated 2 years ago
- Customized matrix multiplication kernels ☆57 · Updated 3 years ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆116 · Updated last year
- A tiny FP8 multiplication unit written in Verilog. TinyTapeout 2 submission. ☆14 · Updated 2 years ago
- Fork of upstream onnxruntime focused on supporting RISC-V accelerators ☆87 · Updated 2 years ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year
- The Riallto Open Source Project from AMD ☆84 · Updated 6 months ago
- Machine Learning Agility (MLAgility) benchmark and benchmarking tools ☆40 · Updated 2 months ago
- QuIP quantization ☆59 · Updated last year
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆25 · Updated 3 months ago
- A lightweight, Pythonic frontend for MLIR ☆80 · Updated last year
- ☆68 · Updated 2 years ago
- GPTQ inference TVM kernel ☆39 · Updated last year
- ☆47 · Updated last year
- Framework to reduce autotune overhead to zero for well-known deployments. ☆84 · Updated last month