RUSH-LAB / SLIDE
☆470 · Updated 3 years ago
Alternatives and similar repositories for SLIDE
Users interested in SLIDE are comparing it to the libraries listed below.
- Codebase for "SLIDE : In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems" ☆1,100 · Updated 4 years ago
- ☆74 · Updated last year
- ☆277 · Updated 2 years ago
- Tensors and Dynamic neural networks in Python with strong GPU acceleration ☆238 · Updated this week
- A uniform interface to run deep learning models from multiple frameworks ☆939 · Updated last year
- 10x faster matrix and vector operations ☆2,498 · Updated 2 years ago
- Fast Block Sparse Matrices for Pytorch ☆549 · Updated 4 years ago
- ☆771 · Updated last year
- Fork of TensorFlow accelerated by DirectML ☆470 · Updated 11 months ago
- GPU implementation of a fast generalized ANS (asymmetric numeral system) entropy encoder and decoder, with extensions for lossless compre… ☆347 · Updated 2 months ago
- Accelerate your Neural Architecture Search (NAS) through fast, reproducible and modular research. ☆480 · Updated 10 months ago
- Haste: a fast, simple, and open RNN library ☆333 · Updated 2 years ago
- Nod.ai 🦈 version of 👻. You probably want to start at https://github.com/nod-ai/shark for the product and the upstream IREE repository … ☆106 · Updated 7 months ago
- A performant and modular runtime for TensorFlow ☆759 · Updated 3 weeks ago
- Bagua Speeds up PyTorch ☆884 · Updated last year
- PyTorch elastic training ☆729 · Updated 3 years ago
- Example code and applications for machine learning on Graphcore IPUs ☆329 · Updated last year
- Library for 8-bit optimizers and quantization routines. ☆777 · Updated 3 years ago
- MADGRAD Optimization Method ☆803 · Updated 7 months ago
- PyTorch, TensorFlow, JAX and NumPy — all of them natively using the same code ☆695 · Updated 2 years ago
- A High Level API for Deep Learning in JAX ☆476 · Updated 2 years ago
- Pytorch Lightning Distributed Accelerators using Ray ☆214 · Updated last year
- Continuous builder and binary build scripts for pytorch ☆354 · Updated 3 weeks ago
- Swarm training framework using Haiku + JAX + Ray for layer parallel transformer language models on unreliable, heterogeneous nodes ☆241 · Updated 2 years ago
- PyTorch interface for the IPU ☆180 · Updated last year
- Code for Parameter Prediction for Unseen Deep Architectures (NeurIPS 2021) ☆492 · Updated 2 years ago
- PatrickStar enables Larger, Faster, Greener Pretrained Models for NLP and democratizes AI for everyone. ☆767 · Updated 2 years ago
- Tuplex is a parallel big data processing framework that runs data science pipelines written in Python at the speed of compiled code. Tupl… ☆814 · Updated 3 weeks ago
- NumPy and SciPy on Multi-Node Multi-GPU systems ☆928 · Updated this week
- A thin, highly portable toolkit for efficiently compiling dense loop-based computation. ☆148 · Updated 2 years ago