SamsungLabs / awesomeyaml
Utility library for parsing, transforming, and querying YAML-based configs
☆14 · Updated 11 months ago
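For context, here is a minimal sketch of the parse/transform/query workflow the library targets. It uses plain PyYAML rather than awesomeyaml's own API (which this page does not show); the config keys and the `query` helper are illustrative assumptions, not part of awesomeyaml.

```python
# Sketch of parsing, transforming, and querying a YAML config.
# Uses PyYAML only; the config layout and `query` helper are made up
# for illustration and do not reflect awesomeyaml's actual API.
import yaml

CONFIG = """
model:
  name: resnet18
  width: 1.0
optimizer:
  name: sgd
  lr: 0.1
"""

def query(cfg, dotted_path, default=None):
    """Look up a nested key like 'optimizer.lr' in a parsed config."""
    node = cfg
    for key in dotted_path.split("."):
        if not isinstance(node, dict) or key not in node:
            return default
        node = node[key]
    return node

cfg = yaml.safe_load(CONFIG)       # parse
cfg["optimizer"]["lr"] *= 0.5      # transform: halve the learning rate
print(query(cfg, "optimizer.lr"))  # query -> 0.05
```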
Alternatives and similar repositories for awesomeyaml:
Users interested in awesomeyaml are comparing it to the libraries listed below.
- A research library for PyTorch-based neural network pruning, compression, and more. ☆161 · Updated 2 years ago
- ☆190 · Updated 4 years ago
- The official implementation of TinyTrain [ICML '24] ☆22 · Updated 9 months ago
- Zero-Cost Proxies for Lightweight NAS ☆151 · Updated 2 years ago
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" https://openreview.net/pdf?id=SkgsACVKPH ☆103 · Updated 5 years ago
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020) ☆49 · Updated 4 years ago
- Efficient LLM Inference Acceleration using Prompting ☆47 · Updated 6 months ago
- Train ImageNet *fast* in 500 lines of code with FFCV ☆142 · Updated 11 months ago
- Lightweight PyTorch implementation of RigL, a sparse-to-sparse optimizer. ☆56 · Updated 3 years ago
- Code release for REPAIR: REnormalizing Permuted Activations for Interpolation Repair ☆47 · Updated last year
- Spartan is an algorithm for training sparse neural network models. This repository accompanies the paper "Spartan Differentiable Sparsity… ☆24 · Updated 2 years ago
- ☆225 · Updated 9 months ago
- Code for CVPR paper: Computationally Budgeted Continual Learning: What Does Matter? ☆18 · Updated last year
- ML model training for edge devices ☆162 · Updated last year
- Code for the publication "Distilled Replay: Overcoming Forgetting through Synthetic Examples" ☆13 · Updated 4 years ago
- Official PyTorch Implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight… ☆63 · Updated 8 months ago
- Implementation of Continuous Sparsification, a method for pruning and ticket search in deep networks ☆33 · Updated 2 years ago
- PyTorch implementation of Mixer-nano (#parameters is 0.67M, originally Mixer-S/16 has 18M) with 90.83% acc. on CIFAR-10. Training from s… ☆32 · Updated 3 years ago
- A Continual Learning Library in PyTorch and JAX ☆14 · Updated 2 years ago
- Identify a binary weight or binary weight and activation subnetwork within a randomly initialized network by only pruning and binarizing … ☆52 · Updated 3 years ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity ☆89 · Updated 2 years ago
- PyTorch code for our CoLLAs-2022 paper "Online Continual Learning for Embedded Devices" ☆13 · Updated 2 years ago
- The Research Tree - A playground for research at the intersection of Continual, Reinforcement, and Self-Supervised Learning. ☆193 · Updated last year
- Code repo for the paper "BiT: Robustly Binarized Multi-distilled Transformer" ☆106 · Updated last year
- Neural Architecture Search for Neural Network Libraries ☆59 · Updated last year
- TPAMI 2021: NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size ☆181 · Updated 2 years ago
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆45 · Updated last year
- ☆203 · Updated 2 years ago
- [IJCAI'22 Survey] Recent Advances on Neural Network Pruning at Initialization. ☆59 · Updated last year
- Macro Neural Architecture Search Benchmark ☆16 · Updated 2 years ago