SqueezeBits / owlite-examples
The OwLite Examples repository offers illustrative example code to help users seamlessly compress PyTorch deep learning models and transform them into TensorRT engines.
☆10 · Updated 9 months ago
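For orientation, here is a minimal sketch of the generic PyTorch → ONNX → TensorRT flow that these examples revolve around. It does not use the OwLite API itself; the placeholder model, file names, and trtexec invocation are illustrative assumptions, not taken from the repository.

```python
# Generic PyTorch -> ONNX -> TensorRT sketch (the actual examples drive this
# flow through the OwLite toolkit; everything below is an illustrative stand-in).
import subprocess

import torch
import torchvision

# Any torch.nn.Module works here; ResNet-18 is used purely as a placeholder.
model = torchvision.models.resnet18(weights=None).eval()
dummy_input = torch.randn(1, 3, 224, 224)

# Export the model to ONNX, the intermediate format TensorRT consumes.
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=17)

# Build a TensorRT engine with the trtexec CLI (requires a local TensorRT install).
# --fp16 enables reduced-precision kernels; a quantization-aware flow would
# instead export a model carrying Q/DQ nodes.
subprocess.run(
    ["trtexec", "--onnx=model.onnx", "--saveEngine=model.engine", "--fp16"],
    check=True,
)
```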
Alternatives and similar repositories for owlite-examples
Users interested in owlite-examples are comparing it to the libraries listed below:
- OwLite is a low-code AI model compression toolkit. ☆46 · Updated 2 months ago
- Study group on deep learning compilers ☆161 · Updated 2 years ago
- ☆61 · Updated last year
- ☆56 · Updated 2 years ago
- ☆54 · Updated 8 months ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model… ☆63 · Updated last year
- PyTorch emulation library for Microscaling (MX)-compatible data formats ☆257 · Updated last month
- Provides examples for writing and building Habana custom kernels using HabanaTools ☆22 · Updated 3 months ago
- ☆152 · Updated 2 years ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆117 · Updated last year
- ☆206 · Updated 3 years ago
- A performance library for machine learning applications. ☆184 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆111 · Updated last week
- ☆73 · Updated last month
- ☆102 · Updated 2 years ago
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization ☆253 · Updated 2 years ago
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆17 · Updated last year
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆110 · Updated 7 months ago
- This repository contains integer operators on GPUs for PyTorch. ☆206 · Updated last year
- PyTorch CoreSIG ☆55 · Updated 6 months ago
- The official NetsPresso Python package. ☆45 · Updated 3 weeks ago
- ☆237 · Updated 2 years ago
- Codebase for ICML'24 paper: Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs ☆27 · Updated last year
- LaLaRAND: Flexible Layer-by-Layer CPU/GPU Scheduling for Real-Time DNN Tasks ☆15 · Updated 3 years ago
- ☆103 · Updated last year
- [DATE 2023] Pipe-BD: Pipelined Parallel Blockwise Distillation ☆11 · Updated 2 years ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆123 · Updated last month
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆40 · Updated 4 years ago
- ☆90 · Updated last year
- Lightweight and Parallel Deep Learning Framework ☆264 · Updated 2 years ago