SqueezeBits / owlite-examples
The OwLite Examples repository offers illustrative example code to help users seamlessly compress PyTorch deep learning models and convert them into TensorRT engines.
☆9 · Updated 4 months ago
Alternatives and similar repositories for owlite-examples:
Users interested in owlite-examples are comparing it to the repositories listed below.
- ☆47 · Updated 2 months ago
- OwLite is a low-code AI model compression toolkit. ☆39 · Updated 4 months ago
- ☆56 · Updated 2 years ago
- Study Group of Deep Learning Compiler ☆156 · Updated 2 years ago
- ☆48 · Updated 9 months ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model…" ☆56 · Updated 10 months ago
- ☆197 · Updated 3 years ago
- ☆11 · Updated 10 months ago
- ☆25 · Updated 2 years ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆93 · Updated last month
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆114 · Updated 10 months ago
- PyTorch CoreSIG ☆54 · Updated last month
- ☆47 · Updated 3 years ago
- ☆132 · Updated last year
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆80 · Updated 3 weeks ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆38 · Updated 4 years ago
- Study of parallel programming: CUDA, OpenMP, MPI, Pthreads ☆56 · Updated 2 years ago
- ☆102 · Updated last year
- Lightweight and Parallel Deep Learning Framework ☆263 · Updated 2 years ago
- one-shot-tuner ☆8 · Updated 2 years ago
- This repository is a meta package providing Samsung OneMCC (Memory Coupled Computing) infrastructure. ☆27 · Updated last year
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆13 · Updated 6 months ago
- A performance library for machine learning applications. ☆183 · Updated last year
- List of papers on Vision Transformer quantization and hardware acceleration in recent AI conferences and journals. ☆72 · Updated 7 months ago
- Neural Network Acceleration using CPU/GPU, ASIC, FPGA ☆60 · Updated 4 years ago
- ☆32 · Updated 2 years ago
- ☆83 · Updated 10 months ago
- PyTorch extension for emulating FP8 data formats on standard FP32 Xeon/GPU hardware. ☆103 · Updated last month
- Artifact repository for the paper "Automatic Generation of High-Performance Quantized Machine Learning Kernels" ☆17 · Updated 4 years ago
- Official implementation of "Searching for Winograd-aware Quantized Networks" (MLSys'20) ☆27 · Updated last year