ExpectationMax / simple_gpu_scheduler
Simple scheduler for running jobs on GPUs
☆173 · Updated 3 years ago
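The core idea behind a simple GPU job scheduler like this is a shared work queue drained by one worker per GPU, with each job pinned to its GPU via the `CUDA_VISIBLE_DEVICES` environment variable. The sketch below is illustrative only (the function name and structure are assumptions, not simple_gpu_scheduler's actual API):

```python
import os
import queue
import subprocess
import threading

def run_commands_on_gpus(commands, gpu_ids):
    """Run each shell command on the next free GPU (illustrative sketch)."""
    q = queue.Queue()
    for cmd in commands:
        q.put(cmd)

    def worker(gpu_id):
        # Each worker owns one GPU and pulls jobs until the queue is empty.
        while True:
            try:
                cmd = q.get_nowait()
            except queue.Empty:
                return
            # Restrict the job to this worker's GPU.
            env = {**os.environ, "CUDA_VISIBLE_DEVICES": str(gpu_id)}
            subprocess.run(cmd, shell=True, env=env)
            q.task_done()

    threads = [threading.Thread(target=worker, args=(g,)) for g in gpu_ids]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

With two GPU ids and four queued commands, the two workers run jobs concurrently, each picking up a new command as soon as its previous one finishes.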
Related projects
Alternatives and complementary repositories for simple_gpu_scheduler
- TBA ☆75 · Updated 5 years ago
- More interactive weak supervision with FlyingSquid ☆314 · Updated 4 years ago
- Interactive Model Iteration with Weak Supervision and Pre-Trained Embeddings ☆76 · Updated 2 years ago
- Distributed Keras Engine, make Keras faster with only one line of code. ☆190 · Updated 5 years ago
- Minimal deep learning library written from scratch in Python, using NumPy/CuPy. ☆119 · Updated 2 years ago
- A cleaner way to build neural networks for PyTorch. ☆185 · Updated 5 years ago
- ☆363 · Updated last year
- 👩 PyTorch and JAX code for the Madam optimiser. ☆50 · Updated 3 years ago
- Train ImageNet in 18 minutes on AWS ☆126 · Updated 7 months ago
- ☆143 · Updated last year
- Search for scientific papers on the command line ☆100 · Updated 2 months ago
- End-to-end training of sparse deep neural networks with little-to-no performance loss. ☆317 · Updated last year
- PyTorch functions and utilities to make your life easier ☆195 · Updated 3 years ago
- PyTorch implementation of the L2L execution algorithm ☆106 · Updated last year
- Unifying Python/C++/CUDA memory: Python buffered array ↔️ `std::vector` ↔️ CUDA managed memory ☆81 · Updated 2 weeks ago
- A thin, highly portable toolkit for efficiently compiling dense loop-based computation. ☆147 · Updated last year
- Extreme Classification in Log Memory via Count-Min Sketch ☆46 · Updated 4 years ago
- Loss Patterns of Neural Networks ☆82 · Updated 3 years ago
- Equi-normalization of Neural Networks ☆115 · Updated 5 years ago
- Code for "Supermasks in Superposition" ☆117 · Updated last year
- Flask-based package for monitoring utilisation of NVIDIA GPUs. ☆155 · Updated 5 years ago
- arxiv_miner is a toolkit for mining research papers on CS arXiv. ☆124 · Updated 7 months ago
- [ICLR 2020] Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks ☆137 · Updated 4 years ago
- Lightweight interface to AWS ☆47 · Updated 5 years ago
- Swarm training framework using Haiku + JAX + Ray for layer-parallel transformer language models on unreliable, heterogeneous nodes ☆239 · Updated last year
- Official PyTorch implementation of "OmniNet: A unified architecture for multi-modal multi-task learning" | Authors: Subhojeet Pramanik, P… ☆512 · Updated 4 years ago
- ☆76 · Updated 4 years ago
- Discovering Neural Wirings (https://arxiv.org/abs/1906.00586) ☆138 · Updated 4 years ago
- Codebase for Learning Invariances in Neural Networks ☆94 · Updated 2 years ago