SqueezeBits / owlite-examples
The OwLite Examples repository offers illustrative examples that help users compress PyTorch deep learning models and convert them into TensorRT engines.
☆10 · Updated 6 months ago
Alternatives and similar repositories for owlite-examples:
Users interested in owlite-examples are comparing it to the libraries listed below.
- OwLite is a low-code AI model compression toolkit. ☆43 · Updated 2 months ago
- ☆56 · Updated 2 years ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model…" ☆60 · Updated last year
- Study Group of Deep Learning Compiler ☆158 · Updated 2 years ago
- ☆52 · Updated 5 months ago
- ☆55 · Updated last year
- A performance library for machine learning applications. ☆184 · Updated last year
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆116 · Updated last year
- ☆83 · Updated last year
- PyTorch CoreSIG ☆55 · Updated 3 months ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆104 · Updated last week
- Provides examples of writing and building Habana custom kernels using the HabanaTools ☆21 · Updated last week
- ☆143 · Updated 2 years ago
- ☆101 · Updated last year
- ☆66 · Updated last month
- PyTorch emulation library for Microscaling (MX)-compatible data formats ☆217 · Updated last week
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆14 · Updated 9 months ago
- Official Implementation of "Genie: Show Me the Data for Quantization" (CVPR 2023) ☆18 · Updated last year
- ☆33 · Updated last week
- one-shot-tuner ☆8 · Updated 2 years ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆109 · Updated 2 months ago
- ☆95 · Updated last year
- nnq_cnd_study stands for Neural Network Quantization & Compact Networks Design Study ☆13 · Updated 4 years ago
- This repository contains integer operators on GPUs for PyTorch. ☆202 · Updated last year
- [ACM EuroSys '23] Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access ☆56 · Updated last year
- Neural Network Acceleration using CPU/GPU, ASIC, FPGA ☆60 · Updated 4 years ago
- ☆25 · Updated 2 years ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆108 · Updated 4 months ago
- Study parallel programming: CUDA, OpenMP, MPI, Pthread ☆56 · Updated 2 years ago
- ☆229 · Updated 2 years ago