TQT's PyTorch implementation.
☆21 · Updated Dec 17, 2021
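TQT (Trained Quantization Thresholds, Jain et al., 2019) learns the log2 of each clipping threshold by back-propagating through the quantizer with straight-through estimators, keeping scales as powers of two. As orientation for what this kind of repo implements, here is a minimal, hypothetical sketch of the idea; the function and parameter names are illustrative, not this repo's actual API:

```python
import torch

def tqt_fake_quant(x, log2_t, bits=8):
    """Fake-quantize x with a power-of-two scale derived from a trained
    log2 threshold (a sketch of the TQT idea, not the repo's API).

    Forward: scale = 2^ceil(log2_t) / 2^(bits-1); x is rounded to the
    signed integer grid and clamped. Backward: ceil() and round() use
    straight-through estimators so gradients reach both x and log2_t.
    """
    qmax = 2 ** (bits - 1) - 1
    qmin = -(2 ** (bits - 1))
    # Straight-through ceil: forward uses ceil, backward is identity.
    log2_t_ste = log2_t + (torch.ceil(log2_t) - log2_t).detach()
    scale = 2.0 ** log2_t_ste / (qmax + 1)
    # Straight-through round, then clamp to the integer range.
    q = x / scale
    q = q + (torch.round(q) - q).detach()
    q = torch.clamp(q, qmin, qmax)
    return q * scale

# Usage: the threshold is an ordinary trainable parameter.
x = torch.tensor([0.5, -0.3, 2.0], requires_grad=True)
log2_t = torch.zeros((), requires_grad=True)  # threshold t = 2^0 = 1
y = tqt_fake_quant(x, log2_t)
y.sum().backward()  # gradients flow to both x and log2_t
```

Values beyond the threshold saturate (2.0 maps to 127/128 here), and because the scale is constrained to a power of two, inference needs only integer multiplies and bit-shifts.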
Alternatives and similar repositories for TQT
Users interested in TQT are comparing it to the libraries listed below.
- ☆10 · Updated Mar 2, 2022
- An open-source PyTorch library for developing energy-efficient, multiplication-less models and applications. ☆14 · Updated Feb 3, 2025
- ☆19 · Updated Mar 16, 2022
- The PyTorch implementation of Learned Step Size Quantization (LSQ) from ICLR 2020 (unofficial). ☆139 · Updated Nov 19, 2020
- [CVPR 2022] AlignQ: Alignment Quantization with ADMM-based Correlation Preservation. ☆11 · Updated Jan 6, 2023
- ☆19 · Updated Mar 17, 2021
- [TCAD 2021] Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA. ☆17 · Updated Jul 7, 2022
- An official PyTorch implementation of the paper "Distance-aware Quantization", ICCV 2021. ☆48 · Updated Nov 1, 2024
- Generate an FPGA design for a TWN. ☆11 · Updated Nov 4, 2019
- Designs from finalist teams of the DAC System Design Contest. ☆37 · Updated Jul 8, 2020
- ☆16 · Updated Jan 20, 2021
- ☆23 · Updated Oct 7, 2021
- Train and deploy LUT-based neural networks on FPGAs. ☆107 · Updated Jun 12, 2024
- FlexASR: A Reconfigurable Hardware Accelerator for Attention-based Seq-to-Seq Networks. ☆50 · Updated Feb 26, 2025
- MaxEVA: Maximizing the Efficiency of Matrix Multiplication on the Versal AI Engine (accepted as a full paper at FPT'23). ☆22 · Updated Apr 17, 2024
- ☆15 · Updated Oct 26, 2022
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models. ☆25 · Updated Mar 29, 2024
- ETHZ Heterogeneous Accelerated Compute Cluster. ☆38 · Updated Oct 7, 2025
- Training Quantized Neural Networks with a Full-precision Auxiliary Module. ☆13 · Updated Jun 19, 2020
- ☆81 · Updated Jul 21, 2022
- A convolution kernel implemented with Vivado HLS on the ZCU104. ☆36 · Updated Mar 15, 2020
- An official implementation of "Network Quantization with Element-wise Gradient Scaling" (CVPR 2021) in PyTorch. ☆94 · Updated Jul 14, 2023
- Artifact for IPDPS'21: DSXplore: Optimizing Convolutional Neural Networks via Sliding-Channel Convolutions. ☆13 · Updated Apr 6, 2021
- ☆23 · Updated Mar 2, 2026
- Neural Network Quantization with Fractional Bit-widths. ☆11 · Updated Feb 19, 2021
- ☆176 · Updated Aug 9, 2023
- Sample scripts for the FPGA-based AI Edge Contest 2019. ☆11 · Updated Mar 20, 2020
- ☆35 · Updated Mar 1, 2019
- A repository of Binary General Matrix Multiply (BGEMM) implemented as a customized CUDA kernel. Thanks to FP6-LLM for the wheels! ☆18 · Updated Aug 30, 2024
- Official implementation of the ECCV 2022 paper LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance". ☆61 · Updated Mar 19, 2023
- FPGA acceleration of arbitrary-precision floating-point computations. ☆40 · Updated May 17, 2022
- SmartNIC. ☆14 · Updated Dec 13, 2018
- [CVPR 2022] DiSparse: Disentangled Sparsification for Multitask Model Compression. ☆14 · Updated Sep 6, 2022
- An FPGA-based neural network inference accelerator that won third place in DAC-SDC. ☆28 · Updated May 11, 2022
- Post-training sparsity-aware quantization. ☆34 · Updated Feb 26, 2023
- HLS Custom-Precision Floating-Point Library. ☆13 · Updated Nov 6, 2017
- Implementation and optimization of matrix multiplication on a single CPU (HPC-THU-2023-Autumn). ☆18 · Updated Feb 27, 2024
- An external memory allocator example for PyTorch. ☆16 · Updated Aug 10, 2025
- Binary neural networks developed by Huawei Noah's Ark Lab. ☆29 · Updated Feb 19, 2021
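Several entries above take a related but distinct approach: LSQ (Esser et al., ICLR 2020) trains the quantization step size itself rather than a power-of-two threshold, scaling the step's gradient by 1/sqrt(N·Qp) to balance it against the weight gradients. A minimal sketch of that idea, based on the paper rather than any listed repo's API (names are illustrative):

```python
import torch

def lsq_fake_quant(x, step, bits=8):
    """Fake-quantize x with a trained step size (a sketch of LSQ,
    not any listed repo's API).

    The step's gradient is scaled by 1/sqrt(numel * qmax) as in the
    paper; round() uses the straight-through estimator.
    """
    qmax = 2 ** (bits - 1) - 1
    qmin = -(2 ** (bits - 1))
    grad_scale = 1.0 / (x.numel() * qmax) ** 0.5
    # Rescale the step's gradient without changing its forward value.
    s = step * grad_scale + (step - step * grad_scale).detach()
    # Clamp to the integer range, then straight-through round.
    q = torch.clamp(x / s, qmin, qmax)
    q = q + (torch.round(q) - q).detach()
    return q * s

# Usage: the step size is a trainable parameter alongside the weights.
w = torch.randn(16, requires_grad=True)
step = torch.tensor(0.05, requires_grad=True)
y = lsq_fake_quant(w, step)
y.sum().backward()  # gradients flow to both w and step
```

Compared with the TQT-style power-of-two constraint, a freely learned step size gives finer-grained scales at the cost of a true multiply at inference time.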