GreenWaves-Technologies / bfloat16
bfloat16 dtype for numpy
☆19 · Updated 2 years ago
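For context: bfloat16 keeps float32's sign bit and 8-bit exponent and truncates the mantissa from 23 to 7 bits, so a conversion is essentially a rounding of the float32 bit pattern's low 16 bits. The sketch below is a minimal NumPy illustration of that round trip; the helper names are made up here, and it deliberately does not use this package's own API, which may differ.

```python
import numpy as np

def float32_to_bfloat16_bits(x):
    """Round float32 values to bfloat16 bit patterns (uint16),
    using round-to-nearest-even on the discarded low 16 bits."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    # Bias of 0x7FFF, plus 1 when bit 16 is set, implements ties-to-even.
    rounding_bias = np.uint32(0x7FFF) + ((bits >> 16) & np.uint32(1))
    return ((bits + rounding_bias) >> 16).astype(np.uint16)

def bfloat16_bits_to_float32(b):
    """Expand bfloat16 bit patterns back to float32 by zero-filling
    the low 16 mantissa bits."""
    return (np.asarray(b, dtype=np.uint32) << 16).view(np.float32)

x = np.array([1.0, 3.14159, 1e-3], dtype=np.float32)
print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(x)))
# e.g. 3.14159 rounds to 3.140625, the nearest bfloat16 value
```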
Alternatives and similar repositories for bfloat16
Users interested in bfloat16 are comparing it to the libraries listed below.
- ☆159 · Updated 2 years ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware (see the FP8 sketch after this list). ☆111 · Updated 9 months ago
- A tiny FP8 multiplication unit written in Verilog. TinyTapeout 2 submission. ☆14 · Updated 2 years ago
- ☆159 · Updated 2 years ago
- QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX ☆161 · Updated this week
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆121 · Updated 10 months ago
- llama INT4 CUDA inference with AWQ ☆55 · Updated 8 months ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆115 · Updated last year
- Benchmarking some transformer deployments ☆26 · Updated 2 years ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆45 · Updated last month
- ☆72 · Updated 10 months ago
- Customized matrix multiplication kernels ☆56 · Updated 3 years ago
- ☆69 · Updated 2 years ago
- Test suite for probing the numerical behavior of NVIDIA tensor cores ☆41 · Updated last year
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆265 · Updated 2 months ago
- ☆12 · Updated 4 years ago
- GPTQ inference TVM kernel ☆40 · Updated last year
- Fork of upstream onnxruntime focused on supporting RISC-V accelerators ☆87 · Updated 2 years ago
- A bunch of kernels that might make stuff slower 😉 ☆59 · Updated this week
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆97 · Updated 2 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆82 · Updated last week
- ☆74 · Updated 6 months ago
- ☆56 · Updated last year
- ☆38 · Updated last year
- Fast sparse deep learning on CPUs ☆56 · Updated 2 years ago
- ☆166 · Updated this week
- ☆50 · Updated last year
- SparseTIR: Sparse Tensor Compiler for Deep Learning ☆138 · Updated 2 years ago
- Memory Optimizations for Deep Learning (ICML 2023) ☆107 · Updated last year
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆181 · Updated 3 weeks ago
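On the FP8-emulation entry above: recent PyTorch releases (2.1+) ship float8 storage dtypes, so the basic idea of emulating FP8 on FP32 hardware can be sketched as a fake-quantization round trip. This is an illustrative sketch under that assumption, not that extension's actual API.

```python
import torch

def fake_quant_fp8_e4m3(x: torch.Tensor) -> torch.Tensor:
    """Emulate FP8 (E4M3) storage on FP32 hardware: round the tensor
    through torch.float8_e4m3fn, then keep computing in float32."""
    return x.to(torch.float8_e4m3fn).to(torch.float32)

w = torch.randn(4, 4)
print((w - fake_quant_fp8_e4m3(w)).abs().max())  # worst-case rounding error
```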