teddykoker / torchsort
Fast, differentiable sorting and ranking in PyTorch
☆844 · Updated 5 months ago
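To make "differentiable ranking" concrete: the idea is to replace the hard, piecewise-constant rank operator with a smooth surrogate so gradients can flow through it. Below is a minimal pure-Python sketch that softens ranks with pairwise sigmoids, controlled by a temperature `tau`. This is an illustration of the general concept only, not torchsort's actual algorithm (torchsort implements the fast projection-based method of Blondel et al., "Fast Differentiable Sorting and Ranking"), and `soft_rank` here is a hypothetical helper, not torchsort's API.

```python
import math


def soft_rank(values, tau=1.0):
    """Differentiable approximation of 1-based ranks via pairwise sigmoids.

    Each element's soft rank is 1 plus the "soft count" of elements
    smaller than it. As tau -> 0 this approaches the hard ranks.
    Conceptual sketch only (O(n^2)); torchsort itself uses fast
    projections onto the permutahedron instead.
    """
    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    n = len(values)
    ranks = []
    for i in range(n):
        # Soft count of indices j with values[j] < values[i].
        r = 1.0 + sum(sigmoid((values[i] - values[j]) / tau)
                      for j in range(n) if j != i)
        ranks.append(r)
    return ranks


# At low temperature the soft ranks match the hard ranks.
print([round(r, 2) for r in soft_rank([3.0, 1.0, 2.0], tau=0.1)])  # → [3.0, 1.0, 2.0]
```

Because each soft rank is a smooth function of every input, a loss defined on ranks (e.g. a differentiable Spearman correlation) can be backpropagated through it, which is the use case libraries like torchsort serve at scale.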
Alternatives and similar repositories for torchsort
Users interested in torchsort are comparing it to the libraries listed below.
- Fast Differentiable Sorting and Ranking ☆612 · Updated last year
- KErnel OPerationS, on CPUs and GPUs, with autodiff and without memory overflows ☆1,139 · Updated last week
- Cockpit: A Practical Debugging Tool for Training Deep Neural Networks ☆484 · Updated 3 years ago
- BackPACK, a backpropagation package built on top of PyTorch that efficiently computes quantities other than the gradient ☆595 · Updated 10 months ago
- Profiling and inspecting memory in PyTorch ☆1,074 · Updated 2 months ago
- Type annotations and dynamic checking for a tensor's shape, dtype, names, etc. ☆1,459 · Updated 6 months ago
- Geometric loss functions between point clouds, images, and volumes ☆663 · Updated 6 months ago
- functorch: JAX-like composable function transforms for PyTorch ☆1,436 · Updated 2 months ago
- Constrained optimization toolkit for PyTorch ☆701 · Updated 3 months ago
- Tiny PyTorch library for maintaining a moving average of a collection of parameters ☆438 · Updated last year
- higher: a PyTorch library allowing users to obtain higher-order gradients over losses spanning training loops rather than individual tr… ☆1,627 · Updated 3 years ago
- [NeurIPS'19] Deep Equilibrium Models ☆776 · Updated 3 years ago
- Code for our NeurIPS 2022 paper ☆369 · Updated 2 years ago
- Fast Block Sparse Matrices for PyTorch ☆547 · Updated 4 years ago
- An implementation of Performer, a linear-attention-based transformer, in PyTorch ☆1,156 · Updated 3 years ago
- Ranger deep learning optimizer, rewritten to use the newest components ☆338 · Updated last year
- Repository for the NeurIPS 2020 Spotlight "AdaBelief Optimizer: Adapting stepsizes by the belief in observed gradients" ☆1,065 · Updated last year
- Helps you write algorithms in PyTorch that adapt to the available (CUDA) memory ☆436 · Updated last year
- ☆784 · Updated this week
- Code for "Self-Attention Between Datapoints: Going Beyond Individual Input-Output Pairs in Deep Learning" ☆415 · Updated last year
- Transformer based on a variant of attention with linear complexity with respect to sequence length ☆811 · Updated last year
- ☆383 · Updated 2 years ago
- A library to inspect and extract intermediate layers of PyTorch models ☆475 · Updated 3 years ago
- A PyTorch port of google-research/google-research/robust_loss/ ☆691 · Updated 2 months ago
- PyTorch library for fast transformer implementations ☆1,749 · Updated 2 years ago
- Riemannian adaptive optimization methods with PyTorch optim ☆999 · Updated 3 months ago
- Approximate nearest neighbor search with product quantization on GPU, in PyTorch and CUDA ☆228 · Updated last year
- ML Collections is a library of Python collections designed for ML use cases ☆994 · Updated this week
- The entmax mapping and its loss, a family of sparse softmax alternatives ☆451 · Updated last year
- Implementation of https://arxiv.org/abs/1904.00962 ☆377 · Updated 4 years ago