dell / jlt
Johnson-Lindenstrauss transform (JLT), random projections (RP), fast Johnson-Lindenstrauss transform (FJLT), and randomized Hadamard transform (RHT) in Python 3.x
☆20 · Updated 2 years ago
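The listing below does not show the package's own API. As a rough illustration of what a dense JLT / random projection does, here is a minimal sketch using a Gaussian projection matrix; the function name `jl_transform` and its parameters are illustrative only and are not taken from the jlt package.

```python
# Minimal sketch of a dense Johnson-Lindenstrauss transform (Gaussian random
# projection). Names and signatures are illustrative, not the jlt package API.
import numpy as np

def jl_transform(X, k, seed=None):
    """Project the rows of X (shape n x d) down to k dimensions.

    Entries of the projection matrix are i.i.d. N(0, 1/k), so pairwise
    Euclidean distances are preserved up to a (1 +/- eps) factor with high
    probability when k is on the order of eps**-2 * log(n).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    R = rng.normal(loc=0.0, scale=1.0 / np.sqrt(k), size=(d, k))
    return X @ R

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 10_000))   # 100 points in 10,000 dimensions
    Y = jl_transform(X, k=512, seed=1)   # sketch down to 512 dimensions
    # Pairwise distances should be approximately preserved.
    d_orig = np.linalg.norm(X[0] - X[1])
    d_proj = np.linalg.norm(Y[0] - Y[1])
    print(f"original distance {d_orig:.2f}, projected distance {d_proj:.2f}")
```

The FJLT and randomized Hadamard transform variants replace the dense Gaussian matrix with a structured product (random sign flips, a Walsh-Hadamard transform, and subsampling), cutting the per-point projection cost from O(dk) to roughly O(d log d).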
Alternatives and similar repositories for jlt
Users interested in jlt are comparing it to the libraries listed below.
- [ICDCS 2023] Evaluation and Optimization of Gradient Compression for Distributed Deep Learning ☆10 · Updated 2 years ago
- An Efficient and General Framework for Layerwise-Adaptive Gradient Compression ☆14 · Updated 2 years ago
- ☆62 · Updated 2 years ago
- Reproducing RigL (ICML 2020) as a part of the ML Reproducibility Challenge 2020 ☆29 · Updated 4 years ago
- Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples [NeurIPS 2021] ☆33 · Updated 4 years ago
- [ICLR 2023] NTK-SAP: Improving neural network pruning by aligning training dynamics ☆20 · Updated 2 years ago
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 · Updated 3 years ago
- Official implementation of the NeurIPS 2020 paper "Sparse Weight Activation Training" ☆29 · Updated 4 years ago
- Implementation for the MLSys 2023 paper "Cuttlefish: Low-rank Model Training without All The Tuning" ☆45 · Updated 2 years ago
- ☆14 · Updated 2 months ago
- [NeurIPS 2022] Code for the paper "Efficiently Computing Local Lipschitz Constants of Neural Networks via Bound Propagation" ☆27 · Updated 2 years ago
- A generic code base for neural network pruning, especially for pruning at initialization ☆31 · Updated 3 years ago
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆45 · Updated 2 years ago
- A Sparse-tensor Communication Framework for Distributed Deep Learning ☆13 · Updated 4 years ago
- [IJCAI'22 Survey] Recent Advances on Neural Network Pruning at Initialization ☆59 · Updated 2 years ago
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆30 · Updated 2 years ago
- ☆16 · Updated 2 years ago
- Code for "Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot" ☆42 · Updated 5 years ago
- [ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Di… ☆69 · Updated last year
- [ICLR 2023] "Sparsity May Cry: Let Us Fail (Current) Sparse Neural Networks Together!" Shiwei Liu, Tianlong Chen, Zhenyu Zhang, Xuxi Chen… ☆28 · Updated 2 years ago
- [DATE 2023] Pipe-BD: Pipelined Parallel Blockwise Distillation ☆11 · Updated 2 years ago
- [NeurIPS 2024] BLAST: Block Level Adaptive Structured Matrix for Efficient Deep Neural Network Inference ☆16 · Updated last year
- Code associated with the paper "Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees" ☆28 · Updated 2 years ago
- Official implementation of "Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices" ☆30 · Updated 11 months ago
- Practical low-rank gradient compression for distributed optimization: https://arxiv.org/abs/1905.13727 ☆149 · Updated last year
- Official PyTorch implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight… ☆63 · Updated last year
- ACL 2023 ☆39 · Updated 2 years ago
- ☆43 · Updated last year
- Code for reproducing "AC/DC: Alternating Compressed/DeCompressed Training of Deep Neural Networks" (NeurIPS 2021) ☆23 · Updated 4 years ago
- ☆35 · Updated 3 years ago