BlackHC / toma
Helps you write algorithms in PyTorch that adapt to the available (CUDA) memory
☆438Updated 10 months ago
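The one-line description can be made concrete: toma retries a computation with a smaller batch size whenever the GPU runs out of memory. Below is a minimal sketch of that adapt-to-memory pattern in plain Python; `run_batch` is a hypothetical stand-in for a real PyTorch forward/backward pass, and `MemoryError` stands in for a CUDA out-of-memory error. toma itself packages this logic behind decorators; see its README for the actual API.

```python
# A minimal sketch of the pattern toma automates (assumptions noted above):
# try the largest batch size, halve it on an out-of-memory error, and
# retry until the work fits.

def run_with_adaptive_batchsize(run_batch, data, initial_batchsize=512):
    """Call run_batch on chunks of data, shrinking the chunk size on OOM."""
    batchsize = initial_batchsize
    while batchsize >= 1:
        try:
            results = []
            for start in range(0, len(data), batchsize):
                results.append(run_batch(data[start:start + batchsize]))
            return results, batchsize
        except MemoryError:  # in real code: a CUDA out-of-memory error
            batchsize //= 2  # halve and retry with smaller chunks
    raise MemoryError("even a batch size of 1 does not fit")
```

In real use the retry loop must also free the partially allocated memory before retrying (toma handles cache clearing for you); this sketch only shows the control flow.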
Alternatives and similar repositories for toma
Users interested in toma are comparing it to the libraries listed below.
- A library to inspect and extract intermediate layers of PyTorch models. ☆473 · Updated 3 years ago
- Cockpit: A Practical Debugging Tool for Training Deep Neural Networks ☆480 · Updated 3 years ago
- Named tensors with first-class dimensions for PyTorch ☆332 · Updated 2 years ago
- Configuration classes enabling type-safe PyTorch configuration for Hydra apps ☆219 · Updated 2 years ago
- ML Collections is a library of Python Collections designed for ML use cases. ☆963 · Updated last week
- PyTorch dataset extended with map, cache, etc. (tensorflow.data-like) ☆329 · Updated 3 years ago
- [Prototype] Tools for the concurrent manipulation of variably sized Tensors. ☆251 · Updated 2 years ago
- A library that contains a rich collection of performant PyTorch model metrics, a simple interface to create new metrics, a toolkit to fac… ☆235 · Updated 5 months ago
- Deep Learning project template best practices with PyTorch Lightning, Hydra, Tensorboard. ☆159 · Updated 4 years ago
- Library for 8-bit optimizers and quantization routines. ☆716 · Updated 2 years ago
- functorch provides JAX-like composable function transforms for PyTorch. ☆1,432 · Updated this week
- Type annotations and dynamic checking for a tensor's shape, dtype, names, etc. ☆1,433 · Updated 2 months ago
- TensorDict is a PyTorch-dedicated tensor container. ☆937 · Updated this week
- Tensors, for human consumption ☆1,263 · Updated 3 weeks ago
- Fast Block Sparse Matrices for PyTorch ☆548 · Updated 4 years ago
- ☆350 · Updated this week
- Profiling and inspecting memory in PyTorch ☆1,061 · Updated 11 months ago
- ☆780 · Updated last month
- Implementation of the Adan (ADAptive Nesterov momentum algorithm) optimizer in PyTorch ☆252 · Updated 2 years ago
- FFCV-SSL: Fast Forward Computer Vision for Self-Supervised Learning. ☆208 · Updated last year
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆386 · Updated last week
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ☆486 · Updated 4 years ago
- ☆208 · Updated 2 years ago
- For optimization algorithm research and development. ☆521 · Updated this week
- Code for our NeurIPS 2022 paper ☆369 · Updated 2 years ago
- Recipes are a standard, well-supported set of blueprints for machine learning engineers to rapidly train models using the latest research… ☆322 · Updated this week
- PyTorch Lightning distributed accelerators using Ray ☆211 · Updated last year
- Provides everything needed for high-performance data loading and augmentation in PyTorch. ☆319 · Updated last year
- Fast, differentiable sorting and ranking in PyTorch ☆817 · Updated last month
- Probing the representations of Vision Transformers. ☆326 · Updated 2 years ago