sIncerass / QBERT
☆15 · Updated 2 years ago
Alternatives and similar repositories for QBERT
Users interested in QBERT are comparing it to the libraries listed below.
- Codebase for ICML'24 paper: Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs ☆27 · Updated last year
- The official PyTorch implementation of the NeurIPS 2022 (spotlight) paper, Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models ☆48 · Updated 2 years ago
- A collection of research papers on efficient training of DNNs ☆69 · Updated 3 years ago
- ☆19 · Updated 3 years ago
- [ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu et al. ☆16 · Updated 3 years ago
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆20 · Updated last year
- Adaptive floating-point based numerical format for resilient deep learning ☆14 · Updated 3 years ago
- ☆70 · Updated last year
- Torch2Chip (MLSys 2024) ☆53 · Updated 5 months ago
- LLM Inference with Microscaling Format ☆31 · Updated 10 months ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆42 · Updated 4 years ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models" ☆66 · Updated last year
- [HPCA'21] SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning ☆102 · Updated last year
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- The official implementation of the DAC 2024 paper GQA-LUT ☆20 · Updated 9 months ago
- ☆31 · Updated 2 weeks ago
- DeiT implementation for Q-ViT ☆24 · Updated 5 months ago
- This repo contains the code for studying the interplay between quantization and sparsity methods ☆23 · Updated 6 months ago
- Flexible simulator for mixed-precision and format simulation of LLMs and vision transformers ☆51 · Updated 2 years ago
- ☆28 · Updated 5 months ago
- Training with Block Minifloat number representation ☆16 · Updated 4 years ago
- ☆44 · Updated last year
- ☆108 · Updated last year
- DOSA: Differentiable Model-Based One-Loop Search for DNN Accelerators ☆17 · Updated 11 months ago
- BitPack is a practical tool to efficiently save ultra-low-precision/mixed-precision quantized models ☆57 · Updated 2 years ago
- [ICASSP'20] DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architectures ☆25 · Updated 2 years ago
- HW/SW co-design of sentence-level energy optimizations for latency-aware multi-task NLP inference ☆52 · Updated last year
- Quantization in the Jagged Loss Landscape of Vision Transformers ☆13 · Updated last year
- [HPCA 2023] ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design ☆116 · Updated 2 years ago
- ☆33 · Updated last year