SqueezeBits / owlite-examples
The OwLite Examples repository offers illustrative example code to help users seamlessly compress PyTorch deep learning models and convert them into TensorRT engines.
☆9 · Updated last month
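As context for what such example code automates, here is a minimal sketch of the generic PyTorch → ONNX → TensorRT export path. It is not OwLite's own API: it uses plain `torch.onnx.export` and the TensorRT 8.x-style Python builder, and the model choice, input shape, opset version, and file names are illustrative assumptions.

```python
# Minimal sketch of the PyTorch -> ONNX -> TensorRT flow (TensorRT 8.x-style API).
# NOTE: illustrative only -- this is NOT OwLite's interface; the model, shapes,
# and file names below are placeholder assumptions.
import torch
import torchvision
import tensorrt as trt

# 1. Start from a trained PyTorch model (compression such as quantization
#    would be applied at this stage).
model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
dummy_input = torch.randn(1, 3, 224, 224)

# 2. Export to ONNX, the exchange format TensorRT consumes.
torch.onnx.export(model, dummy_input, "resnet18.onnx", opset_version=17)

# 3. Parse the ONNX file and build a serialized TensorRT engine.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)
with open("resnet18.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # reduced precision, if the GPU supports it
engine_bytes = builder.build_serialized_network(network, config)
with open("resnet18.engine", "wb") as f:
    f.write(engine_bytes)
```

Per the description above, the examples in this repository layer model compression on top of a flow like this before the engine is built; see the repository itself for the exact OwLite API.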
Related projects
Alternatives and complementary repositories for owlite-examples
- OwLite is a low-code AI model compression toolkit. ☆38 · Updated last month
- Study Group of Deep Learning Compiler ☆152 · Updated last year
- Code for the AAAI 2024 Oral paper "OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Model…" ☆53 · Updated 8 months ago
- LLMServingSim: A HW/SW Co-Simulation Infrastructure for LLM Inference Serving at Scale ☆55 · Updated 2 weeks ago
- A performance library for machine learning applications. ☆179 · Updated last year
- Codebase for the ICML'24 paper: Learning from Students: Applying t-Distributions to Explore Accurate and Efficient Formats for LLMs ☆24 · Updated 4 months ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆112 · Updated 8 months ago
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) ☆36 · Updated 3 years ago
- [HPCA'24] Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System ☆34 · Updated 8 months ago
- Study of parallel programming - CUDA, OpenMP, MPI, Pthreads ☆54 · Updated 2 years ago
- Fast and Efficient Model Serving Using Multi-GPUs with Direct-Host-Access (ACM EuroSys '23) ☆54 · Updated 7 months ago
- NeuPIMs Simulator ☆51 · Updated 4 months ago
- nnq_cnd_study stands for Neural Network Quantization & Compact Networks Design Study ☆13 · Updated 4 years ago
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆81 · Updated 2 months ago
- Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization (ISCA'24) ☆12 · Updated 4 months ago
- NEST Compiler ☆115 · Updated 4 months ago
- Experimental deep learning framework written in Rust ☆14 · Updated 2 years ago
- [DATE 2023] Pipe-BD: Pipelined Parallel Blockwise Distillation ☆11 · Updated last year
- one-shot-tuner ☆8 · Updated last year