spfrommer / torchexplorer
Interactively inspect module inputs, outputs, parameters, and gradients.
☆353 · Updated last month
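As context for what "inspecting module inputs, outputs, parameters, and gradients" involves, here is a minimal sketch using plain PyTorch forward hooks and autograd. This is an illustration of the underlying mechanism only, not torchexplorer's actual API; the `records` dict and hook names are invented for the example.

```python
import torch
import torch.nn as nn

# Capture per-module input/output shapes with forward hooks.
# (torchexplorer automates this kind of instrumentation; this sketch
# uses only core PyTorch.)
records = {}

def make_forward_hook(name):
    def hook(module, inputs, output):
        records[name] = {
            "input_shape": tuple(inputs[0].shape),
            "output_shape": tuple(output.shape),
        }
    return hook

model = nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2))
for name, module in model.named_modules():
    if name:  # skip the top-level container itself
        module.register_forward_hook(make_forward_hook(name))

out = model(torch.randn(3, 8))
out.sum().backward()

# After backward(), parameter gradients are populated and can be summarized.
grad_norms = {n: p.grad.norm().item() for n, p in model.named_parameters()}
```

Running the model once is enough to populate `records` with one entry per submodule, alongside gradient norms for every parameter.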
Alternatives and similar repositories for torchexplorer
Users interested in torchexplorer are comparing it to the libraries listed below.
- Package for extracting and mapping the results of every single tensor operation in a PyTorch model in one line of code.☆636 · Updated 4 months ago
- Annotated version of the Mamba paper☆496 · Updated last year
- torchview: visualize pytorch models☆1,028 · Updated 8 months ago
- TensorHue is a Python library that allows you to visualize tensors right in your console, making understanding and debugging tensor conte…☆124 · Updated 11 months ago
- A library that contains a rich collection of performant PyTorch model metrics, a simple interface to create new metrics, a toolkit to fac…☆246 · Updated last week
- TensorDict is a PyTorch-dedicated tensor container.☆1,003 · Updated this week
- For optimization algorithm research and development.☆558 · Updated last month
- ☆132 · Updated 2 years ago
- Build high-performance AI models with modular building blocks☆577 · Updated this week
- ☆794 · Updated last week
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores☆341 · Updated last year
- MLCommons Algorithmic Efficiency is a benchmark and competition measuring neural network training speedups due to algorithmic improvement…☆406 · Updated last week
- optimizer & lr scheduler & loss function collections in PyTorch☆387 · Updated this week
- A Simplified PyTorch Implementation of Vision Transformer (ViT)☆236 · Updated last year
- An easy, reliable, fluid template for Python packages, complete with docs, testing suites, READMEs, GitHub workflows, linting and much muc…☆199 · Updated last week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton.☆595 · Updated 6 months ago
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture"☆562 · Updated last year
- The AdEMAMix Optimizer: Better, Faster, Older.☆186 · Updated last year
- When it comes to optimizers, it's always better to be safe than sorry☆402 · Updated 4 months ago
- ☆177 · Updated 2 years ago
- CIFAR-10 speedruns: 94% in 2.6 seconds and 96% in 27 seconds☆352 · Updated 2 months ago
- Official Implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate"☆434 · Updated last year
- LoRA and DoRA from Scratch Implementations☆215 · Updated last year
- Speed up model training by fixing data loading.☆575 · Updated last week
- [NeurIPS 2025 Spotlight] TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425)☆446 · Updated 2 weeks ago
- Best practices & guides on how to write distributed pytorch training code☆576 · Updated 3 months ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More https://arxiv.org/abs/2406.16793☆452 · Updated 8 months ago
- Effortless plug-and-play optimizer to cut model training costs by 50%. New optimizer that is 2x faster than Adam on LLMs.☆381 · Updated last year
- ☆292 · Updated last year
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri…☆1,440 · Updated last week
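The first entry above describes tracing every tensor operation in a model. Core PyTorch's `torch.fx` gives a flavor of how such tracing works; the sketch below is an illustration with a toy model of my own, not that library's actual one-line API.

```python
import torch
import torch.nn as nn
from torch import fx

# Toy model (invented for this example) combining a module call,
# a function call, and a residual add.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(4, 4)

    def forward(self, x):
        return torch.relu(self.lin(x)) + x

# Symbolically trace the forward pass: every operation becomes a graph node.
graph_module = fx.symbolic_trace(Net())
ops = [node.op for node in graph_module.graph.nodes]
# ops lists node kinds such as 'placeholder', 'call_module',
# 'call_function', and 'output'.
```

Operation-tracing tools build on mechanisms like this (or on runtime hooks) to record what each tensor operation produced, which is what makes "map every tensor operation in one line" possible.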