SqueezeBits / owlite-examples
The OwLite Examples repository offers illustrative examples to help users seamlessly compress PyTorch deep learning models and convert them into TensorRT engines.
☆10 · Updated 10 months ago
Alternatives and similar repositories for owlite-examples
Users interested in owlite-examples are comparing it to the libraries listed below.
- Study Group of Deep Learning Compiler ☆161 · Updated 2 years ago
- ☆56 · Updated 2 years ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model…" ☆64 · Updated last year
- ☆54 · Updated 8 months ago
- OwLite is a low-code AI model compression toolkit. ☆49 · Updated 2 months ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- ☆67 · Updated last year
- PyTorch emulation library for Microscaling (MX)-compatible data formats ☆270 · Updated last month
- ☆73 · Updated 2 months ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆111 · Updated last month
- LaLaRAND: Flexible Layer-by-Layer CPU/GPU Scheduling for Real-Time DNN Tasks ☆15 · Updated 3 years ago
- ☆154 · Updated 2 years ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆110 · Updated 8 months ago
- PyTorch CoreSIG ☆56 · Updated 7 months ago
- ☆205 · Updated 3 years ago
- ☆103 · Updated 2 years ago
- ☆90 · Updated last year
- Official implementation of the EMNLP'23 paper "Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?" ☆23 · Updated last year
- ☆50 · Updated last year
- This repository contains integer operators on GPUs for PyTorch. ☆211 · Updated last year
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization ☆255 · Updated 2 years ago
- ☆237 · Updated 2 years ago
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆20 · Updated last year
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆41 · Updated 4 years ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆127 · Updated 3 weeks ago
- ☆24 · Updated 8 months ago
- A performance library for machine learning applications. ☆184 · Updated last year
- NEST Compiler ☆117 · Updated 6 months ago
- ☆51 · Updated last year
- Codebase for the ICML'24 paper "Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs" ☆27 · Updated last year