TQT's PyTorch implementation.
☆21 · Updated Dec 17, 2021
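For context, TQT (trained quantization thresholds) quantizes tensors against a power-of-two clipping threshold whose log2 is learned during training. The forward pass below is a minimal NumPy sketch of that idea, not the repo's actual API: the function name `tqt_quantize` and its parameters are illustrative, and the learned-gradient machinery (straight-through estimation on `log2_t`) is omitted.

```python
import numpy as np

def tqt_quantize(x, log2_t, bits=8, signed=True):
    """Sketch of TQT-style forward quantization (illustrative, not the repo API).

    The clipping threshold t = 2**ceil(log2_t) is constrained to a power of
    two; inputs are scaled to the integer grid, rounded, clipped, and
    dequantized back to floating point.
    """
    t = 2.0 ** np.ceil(log2_t)                     # power-of-two threshold
    n = 2 ** (bits - 1) if signed else 2 ** bits   # half (or full) grid size
    s = t / n                                      # quantization step size
    lo = -n if signed else 0
    q = np.clip(np.round(x / s), lo, n - 1)        # round-to-nearest, then clip
    return q * s                                   # dequantized ("fake-quant") output

x = np.array([-1.3, -0.2, 0.0, 0.7, 2.5])
print(tqt_quantize(x, log2_t=1.0, bits=4))  # → [-1.25 -0.25  0.    0.75  1.75]
```

In training, `log2_t` would be a learnable parameter updated through a straight-through estimator, which is what distinguishes TQT from fixed-threshold quantization schemes.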
Alternatives and similar repositories for TQT
Users interested in TQT are comparing it to the libraries listed below.
- ☆10 · Updated Mar 2, 2022
- An open-source PyTorch library for developing energy-efficient multiplication-less models and applications. ☆14 · Updated Feb 3, 2025
- ☆19 · Updated Mar 16, 2022
- The PyTorch implementation of Learned Step Size Quantization (LSQ), ICLR 2020 (unofficial). ☆138 · Updated Nov 19, 2020
- [CVPR 2022] AlignQ: Alignment Quantization with ADMM-based Correlation Preservation. ☆11 · Updated Jan 6, 2023
- [TCAD 2021] Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA. ☆17 · Updated Jul 7, 2022
- ☆19 · Updated Mar 17, 2021
- An official PyTorch implementation of the paper "Distance-aware Quantization" (ICCV 2021). ☆48 · Updated Nov 1, 2024
- Generate an FPGA design for a TWN. ☆11 · Updated Nov 4, 2019
- Designs for finalist teams of the DAC System Design Contest. ☆37 · Updated Jul 8, 2020
- A simple Python implementation of forward-forward NN training by G. Hinton (NeurIPS 2022). ☆21 · Updated Dec 2, 2022
- ☆16 · Updated Jan 20, 2021
- ☆23 · Updated Oct 7, 2021
- FlexASR: A Reconfigurable Hardware Accelerator for Attention-based Seq-to-Seq Networks. ☆50 · Updated Feb 26, 2025
- MaxEVA: Maximizing the Efficiency of Matrix Multiplication on Versal AI Engine (accepted as a full paper at FPT'23). ☆22 · Updated Apr 17, 2024
- Train and deploy LUT-based neural networks on FPGAs. ☆111 · Updated Jun 12, 2024
- ☆15 · Updated Oct 26, 2022
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models. ☆25 · Updated Mar 29, 2024
- ETHZ Heterogeneous Accelerated Compute Cluster. ☆38 · Updated Oct 7, 2025
- Training Quantized Neural Networks with a Full-precision Auxiliary Module. ☆13 · Updated Jun 19, 2020
- ☆81 · Updated Jul 21, 2022
- This project implements a convolution kernel with Vivado HLS on the ZCU104. ☆36 · Updated Mar 15, 2020
- An official implementation of "Network Quantization with Element-wise Gradient Scaling" (CVPR 2021) in PyTorch. ☆94 · Updated Jul 14, 2023
- Artifact for IPDPS'21: DSXplore: Optimizing Convolutional Neural Networks via Sliding-Channel Convolutions. ☆13 · Updated Apr 6, 2021
- ☆23 · Updated Mar 2, 2026
- Neural Network Quantization With Fractional Bit-widths. ☆11 · Updated Feb 19, 2021
- ☆179 · Updated Aug 9, 2023
- Sample scripts for the FPGA-based AI Edge Contest 2019. ☆11 · Updated Mar 20, 2020
- ☆35 · Updated Mar 1, 2019
- A repository of Binary General Matrix Multiply (BGEMM) via a customized CUDA kernel. Thanks to FP6-LLM for the wheels! ☆19 · Updated Aug 30, 2024
- Official implementation of the ECCV 2022 paper LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance". ☆61 · Updated Mar 19, 2023
- DeiT implementation for Q-ViT. ☆25 · Updated Apr 21, 2025
- FPGA acceleration of arbitrary-precision floating-point computations. ☆40 · Updated May 17, 2022
- [CVPR 2022] DiSparse: Disentangled Sparsification for Multitask Model Compression. ☆14 · Updated Sep 6, 2022
- SmartNIC. ☆14 · Updated Dec 13, 2018
- [ICML 2022] ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks. ☆15 · Updated May 18, 2022
- An FPGA-based neural network inference accelerator, which won third place in the DAC-SDC. ☆28 · Updated May 11, 2022
- Post-training sparsity-aware quantization. ☆34 · Updated Feb 26, 2023
- HLS Custom-Precision Floating-Point Library. ☆13 · Updated Nov 6, 2017