GATECH-EIC / FracTrain
[NeurIPS 2020] "FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training" by Yonggan Fu, Haoran You, Yang Zhao, Yue Wang, Chaojian Li, Kailash Gopalakrishnan, Zhangyang Wang, Yingyan Lin
☆11 · Updated 3 years ago
Alternatives and similar repositories for FracTrain
Users interested in FracTrain are comparing it to the repositories listed below.
- [ICML 2021] "Double-Win Quant: Aggressively Winning Robustness of Quantized Deep Neural Networks via Random Precision Training and Inferen… ☆14 · Updated 3 years ago
- Simulator for BitFusion ☆100 · Updated 5 years ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆41 · Updated 4 years ago
- Code for an ICML 2021 submission ☆34 · Updated 4 years ago
- BitPack is a practical tool to efficiently save ultra-low-precision/mixed-precision quantized models. ☆57 · Updated 2 years ago
- ☆107 · Updated last year
- [ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yonga… ☆16 · Updated 3 years ago
- ☆39 · Updated 2 years ago
- PyTorch implementation of EdMIPS: https://arxiv.org/pdf/2004.05795.pdf ☆59 · Updated 5 years ago
- ☆19 · Updated 3 years ago
- ☆76 · Updated 3 years ago
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) ☆27 · Updated last year
- Any-Precision Deep Neural Networks (AAAI 2021) ☆61 · Updated 5 years ago
- [ICLR 2021] HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark ☆111 · Updated 2 years ago
- DNN quantization with outlier channel splitting ☆113 · Updated 5 years ago
- ☆33 · Updated 3 years ago
- This repo contains the code for studying the interplay between quantization and sparsity methods ☆22 · Updated 5 months ago
- A PyTorch implementation of DoReFa-Net ☆132 · Updated 5 years ago
- Codebase for the ICML'24 paper "Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs" ☆27 · Updated last year
- Official implementation of the EMNLP'23 paper "Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?" ☆23 · Updated last year
- A collection of research papers on efficient training of DNNs ☆69 · Updated 3 years ago
- Official implementation of the ICLR 2022 paper "BiBERT: Accurate Fully Binarized BERT" ☆88 · Updated 2 years ago
- MNSIM_Python_v1.0; the earlier circuit-level version: https://github.com/Zhu-Zhenhua/MNSIM_V1.1 ☆34 · Updated last year
- An open-source PyTorch library for developing energy-efficient, multiplication-less models and applications ☆13 · Updated 6 months ago
- Code for the paper "A Statistical Framework for Low-bitwidth Training of Deep Neural Networks" ☆28 · Updated 4 years ago
- Some docs for rookies in nics-efc ☆22 · Updated 3 years ago
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- [NeurIPS 2020] ShiftAddNet: A Hardware-Inspired Deep Network ☆74 · Updated 4 years ago
- AFP is a hardware-friendly quantization framework for DNNs, contributed by Fangxin Liu and Wenbo Zhao ☆13 · Updated 3 years ago
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization ☆95 · Updated 3 years ago