RUSH-LAB / SLIDE
☆470 · Updated 4 years ago

Alternatives and similar repositories for SLIDE
Users interested in SLIDE are comparing it to the libraries listed below.
- Codebase for "SLIDE: In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems" · ☆1,104 · Updated 4 years ago
- 10x faster matrix and vector operations · ☆2,507 · Updated 3 years ago
- Nod.ai 🦈 version of 👻. You probably want to start at https://github.com/nod-ai/shark for the product and the upstream IREE repository … · ☆107 · Updated last week
- Tensors and Dynamic neural networks in Python with strong GPU acceleration · ☆246 · Updated this week
- A uniform interface to run deep learning models from multiple frameworks · ☆941 · Updated last year
- Fast Block Sparse Matrices for PyTorch · ☆550 · Updated 4 years ago
- A performant and modular runtime for TensorFlow · ☆757 · Updated 2 months ago
- Bagua Speeds up PyTorch · ☆881 · Updated last year
- Accelerate your Neural Architecture Search (NAS) through fast, reproducible, and modular research · ☆482 · Updated last week
- MADGRAD Optimization Method · ☆804 · Updated 10 months ago
- PyTorch, TensorFlow, JAX and NumPy — all of them natively using the same code · ☆699 · Updated 2 years ago
- PyTorch elastic training · ☆729 · Updated 3 years ago
- Haste: a fast, simple, and open RNN library · ☆334 · Updated 2 years ago
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ · ☆1,487 · Updated this week
- Mesh TensorFlow: Model Parallelism Made Easier · ☆1,624 · Updated 2 years ago
- Continuous builder and binary build scripts for PyTorch · ☆356 · Updated 3 months ago
- A library for distributed ML training with PyTorch · ☆367 · Updated 2 years ago
- [Prototype] Tools for the concurrent manipulation of variably sized Tensors · ☆251 · Updated 3 years ago
- Common in-memory tensor structure · ☆1,106 · Updated last month
- A tensor-aware point-to-point communication primitive for machine learning · ☆275 · Updated 3 weeks ago
- End-to-end training of sparse deep neural networks with little-to-no performance loss · ☆330 · Updated 2 years ago
- High-efficiency floating-point neural network inference operators for mobile, server, and Web · ☆2,184 · Updated this week
- tree is a library for working with nested data structures · ☆1,012 · Updated 10 months ago
- Swarm training framework using Haiku + JAX + Ray for layer-parallel transformer language models on unreliable, heterogeneous nodes · ☆242 · Updated 2 years ago
- Library for 8-bit optimizers and quantization routines · ☆779 · Updated 3 years ago
- Code for "Parameter Prediction for Unseen Deep Architectures" (NeurIPS 2021) · ☆492 · Updated 2 years ago
- A profiling and performance analysis tool for machine learning · ☆449 · Updated this week
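For context on the technique behind SLIDE itself ("smart algorithms over hardware acceleration"): the paper's core idea is to avoid dense matrix multiplies by using locality-sensitive hashing to select, per input, a small set of candidate neurons to evaluate. The toy sketch below illustrates that idea only; all names in it are ours, and it is not the repository's actual API or implementation.

```python
import numpy as np

# Toy sketch of LSH-based sparse neuron selection (the idea behind SLIDE).
# Assumption: SimHash (random signed projections) as the hash family; the
# variable names and structure here are illustrative, not from the repo.

rng = np.random.default_rng(0)

D, N, K = 16, 1000, 8                  # input dim, neurons, hash bits
W = rng.standard_normal((N, D))        # one weight row per output neuron
planes = rng.standard_normal((K, D))   # random hyperplanes for SimHash

def simhash(v):
    # K-bit signature: sign of the projection onto each hyperplane
    bits = (planes @ v) > 0
    return int("".join("1" if b else "0" for b in bits), 2)

# Preprocessing: bucket each neuron by the hash of its weight vector
buckets = {}
for i in range(N):
    buckets.setdefault(simhash(W[i]), []).append(i)

def sparse_forward(x):
    # Forward pass touches only neurons whose bucket matches the input's hash,
    # instead of computing all N dot products
    active = buckets.get(simhash(x), [])
    return active, W[active] @ x

x = rng.standard_normal(D)
active, acts = sparse_forward(x)
print(len(active), "of", N, "neurons evaluated")
```

With K = 8 hash bits there are 256 buckets, so on average only a few neurons per bucket are evaluated per input; the real system uses multiple hash tables and periodic rehashing to keep recall high as weights change.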