EfficientDL / book
PDFs and Codelabs for the Efficient Deep Learning book.
☆189 · Updated last year
Related projects
Alternatives and complementary repositories for the book
- Outlining techniques for improving the training performance of your PyTorch model without compromising its accuracy · ☆124 · Updated last year
- Context Manager to profile the forward and backward times of PyTorch's nn.Module (see the timing sketch after this list) · ☆83 · Updated last year
- Awesome machine learning model compression research papers, quantization, tools, and learning material. · ☆491 · Updated 2 months ago
- Prune a model while finetuning or training (see the pruning sketch after this list). · ☆394 · Updated 2 years ago
- TF2 implementation of knowledge distillation using the "function matching" hypothesis from https://arxiv.org/abs/2106.05237 · ☆87 · Updated 3 years ago
- A library for researching neural networks compression and acceleration methods. · ☆136 · Updated 2 months ago
- A curated list of awesome resources combining Transformers with Neural Architecture Search · ☆260 · Updated last year
- MinT: Minimal Transformer Library and Tutorials · ☆248 · Updated 2 years ago
- Slicing a PyTorch Tensor Into Parallel Shards · ☆296 · Updated 3 years ago
- This repository contains an overview of important follow-up works based on the original Vision Transformer (ViT) by Google. · ☆149 · Updated 2 years ago
- A library that contains a rich collection of performant PyTorch model metrics, a simple interface to create new metrics, a toolkit to fac… · ☆216 · Updated last week
- Torch Distributed Experimental · ☆116 · Updated 3 months ago
- PyTea: PyTorch Tensor shape error analyzer · ☆316 · Updated 2 years ago
- https://slds-lmu.github.io/seminar_multimodal_dl/ · ☆163 · Updated last year
- Host repository for the "Reproducible Deep Learning" PhD course · ☆405 · Updated 2 years ago
- 📑 Dive into Big Model Training · ☆110 · Updated last year
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" · ☆254 · Updated 2 months ago
- A research library for pytorch-based neural network pruning, compression, and more. · ☆160 · Updated last year
- Implementation of a Transformer, but completely in Triton · ☆249 · Updated 2 years ago
- ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training · ☆201 · Updated last year
- FasterAI: Prune and Distill your models with FastAI and PyTorch · ☆243 · Updated 3 weeks ago
- Curated list of awesome material on optimization techniques to make artificial intelligence faster and more efficient 🚀 · ☆112 · Updated last year
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. · ☆165 · Updated this week
- Papers about model compression · ☆165 · Updated last year
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning". · ☆104 · Updated last year
- Accelerate PyTorch models with ONNX Runtime (see the export sketch after this list) · ☆356 · Updated 2 months ago
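The forward/backward timing entry above can be approximated with a few lines of plain PyTorch. The sketch below is illustrative only and does not reproduce the listed project's API; `profile_fwd_bwd` is a hypothetical helper that times whatever runs inside the `with` block, synchronizing CUDA so queued kernels are counted.

```python
# Minimal timing sketch (not the listed project's API).
import time
from contextlib import contextmanager

import torch
import torch.nn as nn


@contextmanager
def profile_fwd_bwd(tag="module"):
    """Hypothetical helper: time the code executed inside the `with` block."""
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # make sure previously queued kernels finished
    start = time.perf_counter()
    yield
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # wait for kernels launched inside the block
    print(f"{tag}: {(time.perf_counter() - start) * 1e3:.2f} ms")


model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
x = torch.randn(64, 512)

with profile_fwd_bwd("forward"):
    out = model(x)

with profile_fwd_bwd("backward"):
    out.sum().backward()
```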
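Pruning while finetuning, as in the "Prune a model while finetuning or training" entry, can be sketched with PyTorch's built-in `torch.nn.utils.prune` utilities; this is a minimal illustration under that assumption, not the listed project's interface. Masks are attached to the `Linear` layers, the model is finetuned with the masks in place, and the masks are then folded into the weights.

```python
# Minimal prune-then-finetune sketch using torch.nn.utils.prune
# (illustrative only; the listed project has its own API).
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
loss_fn = nn.CrossEntropyLoss()

# Attach unstructured L1 pruning masks to every Linear layer (30% of weights).
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

# Finetune as usual: gradients flow through the masks, so the surviving
# weights adapt to the pruned structure (toy loop with random data).
for step in range(10):
    x = torch.randn(32, 784)
    y = torch.randint(0, 10, (32,))
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Make the pruning permanent by folding the masks into the weights.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")
```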
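Accelerating a PyTorch model with ONNX Runtime, as in the last entry, boils down to exporting the model to ONNX and running it through an `InferenceSession`. A minimal sketch with a toy model follows; the listed project builds additional optimizations on top of this basic flow.

```python
# Minimal export-and-run sketch (illustrative; not the listed project's API).
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
dummy = torch.randn(1, 128)

# Export to ONNX; dynamic_axes lets the batch size vary at inference time.
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},
)

# Run the exported graph with ONNX Runtime on CPU.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {"input": np.random.randn(8, 128).astype(np.float32)})
print(outputs[0].shape)  # (8, 10)
```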