A PyTorch implementation of TQT (Trained Quantization Thresholds for fixed-point quantization-aware training).
☆21 · Updated Dec 17, 2021
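As a minimal sketch of the TQT idea — a power-of-two quantization scale derived from a trainable log2 threshold, with straight-through estimators (STE) for the non-differentiable `ceil` and `round` — the following is illustrative only; the function names and signatures are assumptions, not this repository's actual API:

```python
import torch


def round_ste(x: torch.Tensor) -> torch.Tensor:
    """Round to nearest integer; gradient passes through unchanged (STE)."""
    return (x.round() - x).detach() + x


def ceil_ste(x: torch.Tensor) -> torch.Tensor:
    """Ceiling; gradient passes through unchanged (STE)."""
    return (x.ceil() - x).detach() + x


def tqt_fake_quantize(x: torch.Tensor, log2_t: torch.Tensor, bits: int = 8) -> torch.Tensor:
    """Fake-quantize x with a trainable log2 threshold, TQT-style (sketch).

    The scale is a power of two (ceil of the trainable log2 threshold),
    so gradients flow to both the input and the threshold parameter.
    """
    qmin, qmax = -2 ** (bits - 1), 2 ** (bits - 1) - 1
    # Power-of-two scale keeps the hardware-friendly shift-based dequant.
    scale = 2.0 ** ceil_ste(log2_t) / (2 ** (bits - 1))
    # Quantize, clip to the representable integer grid, then dequantize.
    q = torch.clamp(round_ste(x / scale), qmin, qmax)
    return q * scale
```

Values saturating beyond the threshold are clipped (here, anything above ~1.98 for `log2_t = 1.0` at 8 bits), which is what makes the threshold itself worth training.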
Alternatives and similar repositories for TQT
Users interested in TQT are comparing it to the libraries listed below.
- ☆10 · Updated Mar 2, 2022
- ☆19 · Updated Mar 16, 2022
- An open-source PyTorch library for developing energy-efficient, multiplication-less models and applications. ☆14 · Updated Feb 3, 2025
- ☆16 · Updated Jan 20, 2021
- Designs from finalist teams of the DAC System Design Contest. ☆37 · Updated Jul 8, 2020
- [TCAD 2021] Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA. ☆17 · Updated Jul 7, 2022
- ☆15 · Updated Oct 26, 2022
- ☆19 · Updated Mar 17, 2021
- A PyTorch implementation of Learned Step Size Quantization (LSQ), ICLR 2020 (unofficial). ☆139 · Updated Nov 19, 2020
- ☆23 · Updated Oct 7, 2021
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models. ☆23 · Updated Mar 29, 2024
- An official PyTorch implementation of the paper "Distance-aware Quantization", ICCV 2021. ☆48 · Updated Nov 1, 2024
- FlexASR: A Reconfigurable Hardware Accelerator for Attention-based Seq-to-Seq Networks. ☆51 · Updated Feb 26, 2025
- Sample scripts for the FPGA-based AI Edge Contest 2019. ☆12 · Updated Mar 20, 2020
- [CVPR 2022] AlignQ: Alignment Quantization with ADMM-based Correlation Preservation. ☆11 · Updated Jan 6, 2023
- Training Quantized Neural Networks with a Full-precision Auxiliary Module. ☆13 · Updated Jun 19, 2020
- Train and deploy LUT-based neural networks on FPGAs. ☆107 · Updated Jun 12, 2024
- A collection of URLs related to High-Level Synthesis (HLS). ☆13 · Updated Jun 26, 2021
- Generate an FPGA design for a TWN. ☆10 · Updated Nov 4, 2019
- An FPGA-based neural network inference accelerator, which won third place in the DAC-SDC. ☆28 · Updated May 11, 2022
- Artifact for IPDPS'21: DSXplore: Optimizing Convolutional Neural Networks via Sliding-Channel Convolutions. ☆13 · Updated Apr 6, 2021
- You Only Search Once: On Lightweight Differentiable Architecture Search for Resource-Constrained Embedded Platforms. ☆12 · Updated Apr 17, 2023
- Neural Network Quantization with Fractional Bit-widths. ☆11 · Updated Feb 19, 2021
- The official implementation of "NAS-BNN: Neural Architecture Search for Binary Neural Networks". ☆13 · Updated Aug 30, 2024
- [CVPR 2022] DiSparse: Disentangled Sparsification for Multitask Model Compression. ☆14 · Updated Sep 6, 2022
- ☆32 · Updated Mar 31, 2025
- [ICML 2022] ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks. ☆15 · Updated May 18, 2022
- Post-training sparsity-aware quantization. ☆34 · Updated Feb 26, 2023
- Code for Joint Neural Architecture Search and Quantization. ☆14 · Updated Apr 10, 2019
- SmartNIC. ☆14 · Updated Dec 13, 2018
- An external memory allocator example for PyTorch. ☆16 · Updated Aug 10, 2025
- ETHZ Heterogeneous Accelerated Compute Cluster. ☆38 · Updated Oct 7, 2025
- FPGA acceleration of arbitrary-precision floating-point computations. ☆40 · Updated May 17, 2022
- ☆35 · Updated Mar 1, 2019
- A repository of Binary General Matrix Multiply (BGEMM) via a customized CUDA kernel. Thanks to FP6-LLM for the groundwork. ☆18 · Updated Aug 30, 2024
- ☆17 · Updated Nov 20, 2022
- ☆23 · Updated Feb 10, 2026
- This project implements a convolution kernel with Vivado HLS on the ZCU104. ☆36 · Updated Mar 15, 2020
- Implementation and optimization of matrix multiplication on a single CPU (HPC-THU-2023-Autumn). ☆18 · Updated Feb 27, 2024