SqueezeBits / owlite-examples
The OwLite Examples repository offers illustrative example code to help users seamlessly compress PyTorch deep learning models and convert them into TensorRT engines.
☆10 · Updated 7 months ago
Alternatives and similar repositories for owlite-examples
Users interested in owlite-examples are comparing it to the libraries listed below.
- OwLite is a low-code AI model compression toolkit for AI models. ☆43 · Updated 2 months ago
- ☆56 · Updated 2 years ago
- Study Group of Deep Learning Compiler ☆158 · Updated 2 years ago
- ☆52 · Updated 6 months ago
- ☆55 · Updated last year
- PyTorch CoreSIG ☆55 · Updated 4 months ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model… ☆61 · Updated last year
- ☆85 · Updated last year
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆117 · Updated last year
- ☆67 · Updated last month
- A performance library for machine learning applications. ☆184 · Updated last year
- ☆101 · Updated last year
- ☆47 · Updated last year
- ☆146 · Updated 2 years ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆113 · Updated 2 months ago
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆14 · Updated 10 months ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆106 · Updated last month
- ☆47 · Updated 3 years ago
- Official Implementation of "Genie: Show Me the Data for Quantization" (CVPR 2023) ☆18 · Updated 2 years ago
- NEST Compiler ☆116 · Updated 3 months ago
- PyTorch emulation library for Microscaling (MX)-compatible data formats ☆224 · Updated 3 weeks ago
- PyTorch extension enabling direct access to cuDNN-accelerated C++ convolution functions. ☆13 · Updated 4 years ago
- Provides examples for writing and building Habana custom kernels using the HabanaTools ☆21 · Updated last month
- Official implementation of the EMNLP'23 paper "Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?" ☆22 · Updated last year
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆40 · Updated 4 years ago
- nnq_cnd_study stands for Neural Network Quantization & Compact Networks Design Study ☆13 · Updated 4 years ago
- ☆97 · Updated last year
- ☆21 · Updated 11 months ago
- ☆25 · Updated 2 years ago
- LaLaRAND: Flexible Layer-by-Layer CPU/GPU Scheduling for Real-Time DNN Tasks ☆13 · Updated 3 years ago