PhilJd / contiguous_pytorch_params
Accelerate training by storing parameters in one contiguous chunk of memory.
☆291 · Updated 4 years ago
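The idea behind the repo: PyTorch stores each parameter (and its gradient) as a separate tensor, so an optimizer step launches many small kernels; if all parameters live in one contiguous buffer, the update can run as a single vectorized operation. Below is a minimal sketch of that idea in plain PyTorch — not the repository's actual API — assuming a single dtype and device and ignoring gradient handling, which the real library also manages:

```python
import torch
import torch.nn as nn

def make_params_contiguous(module: nn.Module) -> torch.Tensor:
    """Copy all parameters into one flat buffer and re-point each
    parameter at a view of that buffer (illustrative sketch only)."""
    params = list(module.parameters())
    flat = torch.empty(sum(p.numel() for p in params),
                       dtype=params[0].dtype, device=params[0].device)
    offset = 0
    for p in params:
        n = p.numel()
        flat[offset:offset + n].copy_(p.data.reshape(-1))
        p.data = flat[offset:offset + n].view_as(p)  # alias into the buffer
        offset += n
    return flat

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
flat = make_params_contiguous(model)
# Every parameter now aliases the single contiguous buffer, so e.g.
# weight decay can touch all of them in one kernel launch:
flat.mul_(0.999)
```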
Alternatives and similar repositories for contiguous_pytorch_params
Users interested in contiguous_pytorch_params are comparing it to the libraries listed below.
- Slicing a PyTorch Tensor Into Parallel Shards ☆298 · Updated 3 years ago
- Implementation of the LAMB optimizer (https://arxiv.org/abs/1904.00962) ☆374 · Updated 4 years ago
- Lookahead optimizer ("Lookahead Optimizer: k steps forward, 1 step back") for PyTorch ☆335 · Updated 5 years ago
- 🛠 Toolbox to extend PyTorch functionalities ☆420 · Updated last year
- Experimental ground for optimizing memory of PyTorch models ☆365 · Updated 7 years ago
- PyTorch layer-by-layer model profiler ☆607 · Updated 3 years ago
- Example code showing how to use Nvidia DALI in PyTorch, with fallback to torchvision. Contains a few differences to the official Nvidia … ☆197 · Updated 5 years ago
- Deep Learning Experiment Management ☆639 · Updated 2 years ago
- Official PyTorch repo for "ReZero is All You Need: Fast Convergence at Large Depth" ☆408 · Updated 9 months ago
- Useful PyTorch functions and modules that are not implemented by default ☆187 · Updated last year
- Simple package that makes your generator work in a background thread ☆280 · Updated 2 years ago
- ☆165 · Updated 6 years ago
- Implementations of ideas from recent papers ☆393 · Updated 4 years ago
- Demystify RAM Usage in Multi-Process Data Loaders ☆193 · Updated 2 years ago
- PyTorch implementation of the network design paradigm described in the paper "Designing Network Design Spaces" ☆185 · Updated 9 months ago
- A New Optimization Technique for Deep Neural Networks ☆535 · Updated 3 years ago
- Over9000 optimizer ☆427 · Updated 2 years ago
- Apollo: An Adaptive Parameter-wise Diagonal Quasi-Newton Method for Nonconvex Stochastic Optimization ☆183 · Updated 3 years ago
- A general and accurate MACs / FLOPs profiler for PyTorch models ☆604 · Updated last year
- PyTorch implementation of the Lookahead optimizer ☆188 · Updated 2 years ago
- Sublinear memory optimization for deep learning (https://arxiv.org/abs/1604.06174) ☆598 · Updated 5 years ago
- ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training ☆200 · Updated 2 years ago
- DeLighT: Very Deep and Light-Weight Transformers ☆467 · Updated 4 years ago
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ☆484 · Updated 4 years ago
- torchsummaryX: Improved visualization tool of torchsummary ☆303 · Updated 3 years ago
- Implementation of the 😇 attention layer from the paper "Scaling Local Self-Attention For Parameter Efficient Visual Backbones" ☆199 · Updated 4 years ago
- ☆169 · Updated 4 years ago
- MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks (NeurIPS 2020 workshop) ☆695 · Updated 3 years ago
- Library for faster pinned CPU <-> GPU transfer in PyTorch ☆685 · Updated 5 years ago
- Official PyTorch implementation of "TResNet: High-Performance GPU-Dedicated Architecture" (WACV 2021) ☆475 · Updated 5 months ago