SqueezeBits / owlite-examples
The OwLite Examples repository provides example code that helps users compress PyTorch deep learning models and convert them into TensorRT engines.
☆9 · Updated last month
Related projects
Alternatives and complementary repositories for owlite-examples
- OwLite is a low-code compression toolkit for AI models. ☆38 · Updated last month
- Study Group of Deep Learning Compiler ☆155 · Updated last year
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆58 · Updated last month
- A performance library for machine learning applications. ☆180 · Updated last year
- Neural Network Acceleration using CPU/GPU, ASIC, FPGA ☆60 · Updated 4 years ago
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model…" ☆53 · Updated 8 months ago
- Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access (ACM EuroSys '23) ☆54 · Updated 7 months ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆112 · Updated 8 months ago
- NeuPIMs Simulator ☆54 · Updated 5 months ago
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA '24) ☆12 · Updated 4 months ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆83 · Updated 3 months ago
- Study parallel programming - CUDA, OpenMP, MPI, Pthread ☆54 · Updated 2 years ago
- one-shot-tuner ☆8 · Updated last year
- ONNXim is a fast cycle-level simulator that can model multi-core NPUs for DNN inference ☆68 · Updated last week
- NEST Compiler ☆116 · Updated 4 months ago
- Lightweight and Parallel Deep Learning Framework ☆263 · Updated last year
- FriendliAI Model Hub ☆89 · Updated 2 years ago
- nnq_cnd_study stands for Neural Network Quantization & Compact Networks Design Study ☆13 · Updated 4 years ago