learning-at-home / hivemind
Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world.
⭐ 2,187 · Updated 3 weeks ago
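Hivemind's core abstraction is hivemind.Optimizer, which wraps a regular PyTorch optimizer so that independent peers, discovering each other through a DHT, accumulate gradients toward a shared target batch size and average parameters over the network. Below is a minimal sketch adapted from the project's quickstart; the toy model, run_id, and batch sizes are illustrative choices, and exact parameter names may vary between hivemind releases.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import hivemind

# Toy model; in a real collaborative run, every peer trains the same architecture.
model = nn.Linear(16, 2)
base_opt = torch.optim.SGD(model.parameters(), lr=0.1)

# Start a DHT node. The first peer starts from scratch; later peers join by
# passing initial_peers=[...] with a multiaddress printed by an existing peer.
dht = hivemind.DHT(start=True)
print("To join this run, use initial_peers =", dht.get_visible_maddrs())

opt = hivemind.Optimizer(
    dht=dht,
    run_id="demo_run",         # identifier shared by all peers in this run (illustrative)
    batch_size_per_step=32,    # samples contributed by each local opt.step()
    target_batch_size=10_000,  # collective samples per global averaging round
    optimizer=base_opt,        # the local optimizer being wrapped
    use_local_updates=True,    # apply local steps, average parameters in background
    verbose=True,
)

for _ in range(100):
    x, y = torch.randn(32, 16), torch.randint(0, 2, (32,))
    F.cross_entropy(model(x), y).backward()
    opt.step()      # counts toward the shared batch; may trigger averaging
    opt.zero_grad()
```

Other peers running the same script with a matching run_id and target_batch_size join the collaboration automatically once connected to the DHT.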
Alternatives and similar repositories for hivemind
Users interested in hivemind are comparing it to the libraries listed below.
- Deploy an ML inference service on a budget in less than 10 lines of code. ⭐ 1,343 · Updated last year
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ⭐ 2,929 · Updated last week
- Hummingbird compiles trained ML models into tensor computation for faster inference. ⭐ 3,441 · Updated last month
- Your PyTorch AI Factory - Flash enables you to easily configure and run complex AI recipes for over 15 tasks across 7 data domains. ⭐ 1,742 · Updated last year
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable… ⭐ 1,568 · Updated last year
- PyTorch extensions for high performance and large scale training. ⭐ 3,328 · Updated last month
- Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models. ⭐ 2,137 · Updated last week
- Version control for machine learning. ⭐ 1,664 · Updated 3 months ago
- Sparsity-aware deep learning inference runtime for CPUs. ⭐ 3,147 · Updated this week
- The WeightWatcher tool for predicting the accuracy of Deep Neural Networks. ⭐ 1,598 · Updated last week
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF). ⭐ 4,652 · Updated last year
- Enabling PyTorch on XLA devices (e.g. Google TPU). ⭐ 2,613 · Updated this week
- 10x faster matrix and vector operations. ⭐ 2,486 · Updated 2 years ago
- NVTabular is a feature engineering and preprocessing library for tabular data designed to quickly and easily manipulate terabyte-scale datasets… ⭐ 1,084 · Updated 9 months ago
- Aim 💫 - An easy-to-use & supercharged open-source experiment tracker. ⭐ 5,613 · Updated this week
- AIStore: scalable storage for AI applications. ⭐ 1,519 · Updated this week
- Cramming the training of a (BERT-type) language model into limited compute. ⭐ 1,332 · Updated 11 months ago
- Minimalistic large language model 3D-parallelism training. ⭐ 1,898 · Updated this week
- ⭐ 1,573 · Updated 2 years ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ⭐ 2,014 · Updated 2 months ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ⭐ 1,046 · Updated last year
- Mesh TensorFlow: Model Parallelism Made Easier. ⭐ 1,607 · Updated last year
- 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading. ⭐ 9,645 · Updated 8 months ago
- OpenAI's DALL-E for large scale training in mesh-tensorflow. ⭐ 433 · Updated 3 years ago
- Thunder gives you PyTorch models superpowers for training and inference. Unlock out-of-the-box optimizations for performance, memory and… ⭐ 1,357 · Updated this week
- JAX-based neural network library. ⭐ 3,034 · Updated this week
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA… ⭐ 4,640 · Updated 2 months ago
- Curated list of open-source tooling for data-centric AI on unstructured data. ⭐ 718 · Updated last year
- Minimalistic 4D-parallelism distributed training framework for education purposes. ⭐ 1,518 · Updated this week
- An implementation of model-parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries. ⭐ 7,203 · Updated this week