DeMoriarty / TorchPQ
Approximate nearest neighbor search with product quantization on GPU, in PyTorch and CUDA
☆229 · Updated last year
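For context, the product-quantization pipeline that TorchPQ accelerates on GPU can be sketched in a few dozen lines. The following is a minimal NumPy illustration, not TorchPQ's actual API; the function names `train_pq`, `encode`, and `adc_search` are invented for this example. It shows the three standard steps: per-subspace codebook training with k-means, encoding database vectors as small integer codes, and asymmetric distance computation (ADC) at query time.

```python
import numpy as np

def train_pq(X, m=4, k=16, iters=10, seed=0):
    """Train product-quantization codebooks: split each d-dim vector
    into m sub-vectors and run a small k-means in each subspace."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    ds = d // m
    codebooks = []
    for j in range(m):
        sub = X[:, j * ds:(j + 1) * ds]
        C = sub[rng.choice(n, k, replace=False)].copy()
        for _ in range(iters):
            # assign each sub-vector to its nearest centroid
            dist = ((sub[:, None, :] - C[None, :, :]) ** 2).sum(-1)
            assign = dist.argmin(1)
            for c in range(k):
                pts = sub[assign == c]
                if len(pts):
                    C[c] = pts.mean(0)
        codebooks.append(C)
    return codebooks

def encode(X, codebooks):
    """Replace each sub-vector by the index of its nearest centroid."""
    m = len(codebooks)
    ds = X.shape[1] // m
    codes = np.empty((X.shape[0], m), dtype=np.int64)
    for j, C in enumerate(codebooks):
        sub = X[:, j * ds:(j + 1) * ds]
        codes[:, j] = ((sub[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
    return codes

def adc_search(q, codes, codebooks, topk=5):
    """ADC: build a per-subspace table of squared distances from the
    query to each centroid, then score every code by table lookup."""
    m = len(codebooks)
    ds = q.shape[0] // m
    tables = np.stack([((q[j * ds:(j + 1) * ds] - C) ** 2).sum(-1)
                       for j, C in enumerate(codebooks)])   # (m, k)
    dists = tables[np.arange(m), codes].sum(1)              # (n,)
    return np.argsort(dists)[:topk]
```

The point of the design is that query cost is dominated by the small `(m, k)` distance tables plus integer lookups, rather than full `d`-dimensional distance computations; TorchPQ's contribution is running these steps as batched GPU kernels.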
Alternatives and similar repositories for TorchPQ
Users interested in TorchPQ are also comparing it to the repositories listed below.
- Sinkhorn Transformer - Practical implementation of Sparse Sinkhorn Attention ☆269 · Updated 4 years ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆219 · Updated 2 years ago
- Fully featured implementation of Routing Transformer ☆298 · Updated 4 years ago
- [Prototype] Tools for the concurrent manipulation of variably sized Tensors ☆250 · Updated 3 years ago
- A PyTorch implementation of the k-means clustering algorithm ☆334 · Updated 9 months ago
- Official PyTorch implementation of Long-Short Transformer (NeurIPS 2021) ☆228 · Updated 3 years ago
- TF/Keras code for DiffStride, a pooling layer with learnable strides ☆124 · Updated 3 years ago
- An implementation of local windowed attention for language modeling ☆488 · Updated 4 months ago
- Implementation of Linformer for PyTorch ☆302 · Updated last year
- Implementation of a memory-efficient multi-head attention, as proposed in the paper "Self-attention Does Not Need O(n²) Memory" ☆384 · Updated 2 years ago
- Sequence modeling with Mega ☆301 · Updated 2 years ago
- ☆164 · Updated 2 years ago
- Block-sparse primitives for PyTorch ☆160 · Updated 4 years ago
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … ☆237 · Updated 2 years ago
- ☆387 · Updated 2 years ago
- Memory Efficient Attention (O(sqrt(n))) for JAX and PyTorch ☆184 · Updated 2 years ago
- Slicing a PyTorch Tensor Into Parallel Shards ☆301 · Updated 5 months ago
- Implementation of LAMB (https://arxiv.org/abs/1904.00962) ☆377 · Updated 4 years ago
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ☆483 · Updated 4 years ago
- A small demonstration of using WebDataset with ImageNet and PyTorch Lightning ☆75 · Updated last year
- Efficient, checkpointed data loading for deep learning with massive data sets ☆210 · Updated 2 years ago
- Efficient reservoir sampling implementation for PyTorch ☆107 · Updated 4 years ago
- Demystify RAM Usage in Multi-Process Data Loaders ☆205 · Updated 2 years ago
- Experimental ground for optimizing memory of PyTorch models ☆366 · Updated 7 years ago
- Implementation of a Transformer, but completely in Triton ☆277 · Updated 3 years ago
- Tiny PyTorch library for maintaining a moving average of a collection of parameters ☆441 · Updated last year
- Profiling and inspecting memory in PyTorch ☆1,076 · Updated 3 months ago
- Transformer based on a variant of attention that is linear in complexity with respect to sequence length ☆815 · Updated last year
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers" ☆110 · Updated 4 years ago