spfrommer / torchexplorer
Interactively inspect module inputs, outputs, parameters, and gradients.
☆345 · Updated 2 months ago
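For context, here is a minimal sketch of how torchexplorer is typically attached to a training loop, based on the usage shown in the project README; the exact `watch()` signature and the `backend='standalone'` option are assumptions and may differ between versions.

```python
# Minimal sketch (not an official example): attach torchexplorer to a model
# before training so module inputs, outputs, parameters, and gradients are
# captured on each backward pass.
import torch
import torch.nn as nn
import torchexplorer  # pip install torchexplorer

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))

# Assumption taken from the project README: watch() instruments the module,
# and the 'standalone' backend serves a local inspection UI (a 'wandb'
# backend is also described there).
torchexplorer.watch(model, backend='standalone')

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    x = torch.randn(8, 16)
    loss = model(x).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()  # hooks registered by watch() record activations and gradients here
    optimizer.step()
```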
Alternatives and similar repositories for torchexplorer
Users interested in torchexplorer are comparing it to the libraries listed below.
- Package for extracting and mapping the results of every single tensor operation in a PyTorch model in one line of code. ☆595 · Updated 4 months ago
- torchview: visualize PyTorch models ☆964 · Updated 2 months ago
- Annotated version of the Mamba paper ☆486 · Updated last year
- TensorDict is a PyTorch-dedicated tensor container. ☆942 · Updated this week
- For optimization algorithm research and development. ☆521 · Updated this week
- Transform datasets at scale. Optimize datasets for fast AI model training. ☆506 · Updated this week
- Build high-performance AI models with modular building blocks ☆533 · Updated this week
- Official Implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate" ☆426 · Updated 7 months ago
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement… ☆388 · Updated last week
- TensorHue is a Python library that allows you to visualize tensors right in your console, making understanding and debugging tensor conte… ☆118 · Updated 4 months ago
- A Simplified PyTorch Implementation of Vision Transformer (ViT) ☆193 · Updated last year
- A library that contains a rich collection of performant PyTorch model metrics, a simple interface to create new metrics, a toolkit to fac… ☆235 · Updated 6 months ago
- An easy, reliable, fluid template for Python packages complete with docs, testing suites, readmes, GitHub workflows, linting and much muc… ☆183 · Updated 3 months ago
- ☆780 · Updated last month
- ☆133 · Updated last year
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆560 · Updated this week
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆323 · Updated 6 months ago
- Muon is an optimizer for hidden layers in neural networks ☆1,092 · Updated this week
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆429 · Updated 2 months ago
- Helpful tools and examples for working with flex-attention ☆876 · Updated last week
- Kolmogorov-Arnold Networks (KAN) using Chebyshev polynomials instead of B-splines. ☆381 · Updated last year
- When it comes to optimizers, it's always better to be safe than sorry ☆302 · Updated 3 months ago
- Effortless plug-and-play optimizer to cut model training costs by 50%. New optimizer that is 2x faster than Adam on LLMs. ☆380 · Updated last year
- ☆292 · Updated 7 months ago
- Best practices & guides on how to write distributed PyTorch training code ☆450 · Updated 4 months ago
- LoRA and DoRA from Scratch Implementations ☆206 · Updated last year
- A PyTorch quantization backend for Optimum ☆963 · Updated 2 weeks ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆350 · Updated last year
- An extension of the nanoGPT repository for training small MoE models. ☆162 · Updated 4 months ago
- The AdEMAMix Optimizer: Better, Faster, Older. ☆183 · Updated 10 months ago