apd10 / universal_memory_allocation
☆15 · Updated 3 years ago
Alternatives and similar repositories for universal_memory_allocation
Users interested in universal_memory_allocation are comparing it to the libraries listed below.
- A Learnable LSH Framework for Efficient NN Training ☆32 · Updated 4 years ago
- A study of the downstream instability of word embeddings ☆12 · Updated 3 years ago
- High-performance PyTorch modules ☆18 · Updated 2 years ago
- [NeurIPS '22] Data distillation for recommender systems; shows equivalent performance with 2-3 orders of magnitude less data. ☆23 · Updated 2 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 (a minimal sketch of the idea follows this list) ☆49 · Updated 3 years ago
- Differentiable Product Quantization for End-to-End Embedding Compression. ☆63 · Updated 2 years ago
- Implementation of vector quantization algorithms; code for Norm-Explicit Quantization: Improving Vector Quantization for Maximum Inner P… ☆59 · Updated 4 years ago
- AdamW optimizer for bfloat16 models in PyTorch 🔥. ☆36 · Updated last year
- Efficient LDA solution on GPUs. ☆24 · Updated 7 years ago
- Large-scale graphical model ☆24 · Updated 6 years ago
- A Tensor-Train-based compression library for sparse embedding tables used in large-scale machine learning models such as … ☆194 · Updated 3 years ago
- A Python library for highly configurable transformers, easing model architecture search and experimentation. ☆49 · Updated 3 years ago
- ☆14 · Updated 3 years ago
- Ancestral Gumbel-Top-k Sampling (see the sampling sketch after this list) ☆25 · Updated 5 years ago
- ☆32 · Updated last year
- Extremely simple and fast extreme multi-class and multi-label classifiers. ☆70 · Updated 5 months ago
- Code for COMET: Cardinality Constrained Mixture of Experts with Trees and Local Search ☆11 · Updated 2 years ago
- Code for the paper Deformable Butterfly: A Highly Structured and Sparse Linear Transform. ☆13 · Updated 3 years ago
- ☆15 · Updated 3 years ago
- ☆27 · Updated 5 years ago
- Joint Optimization of Cascade Ranking Models (WSDM '19) ☆13 · Updated 3 years ago
- sigma-MoE layer ☆20 · Updated last year
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing ☆50 · Updated 3 years ago
- This package implements THOR: Transformer with Stochastic Experts. ☆65 · Updated 3 years ago
- Hyperparameter tuning via uncertainty modeling ☆47 · Updated last year
- A collection of optimizers, some arcane, others well known, for Flax. ☆29 · Updated 4 years ago
- PyTorch library for factorized L0-based pruning. ☆45 · Updated last year
- "Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices", official implementation ☆29 · Updated 7 months ago
- A deep learning library based on PyTorch focused on low-resource language research and robustness ☆70 · Updated 3 years ago
- A memory-efficient DLRM training solution using ColossalAI ☆106 · Updated 2 years ago
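
For context on the ReLA entry above: the linked paper (https://arxiv.org/abs/2104.07012) replaces the softmax over attention scores with a ReLU and re-normalizes the attended output (the paper uses a gated RMSNorm). The snippet below is a minimal illustrative sketch of that idea in PyTorch, not the linked repository's code; the plain RMSNorm and the `rela_attention` helper are simplifications made here.

```python
# Minimal sketch of Rectified Linear Attention (ReLA), per arXiv:2104.07012:
# the softmax over attention scores is replaced by ReLU, and the attended
# output is re-normalized (the paper uses a gated RMSNorm; plain RMSNorm here).
import torch
import torch.nn.functional as F


def rela_attention(q, k, v, eps=1e-6):
    """q, k, v: (batch, heads, seq, dim) tensors."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d**0.5  # (batch, heads, q_len, k_len)
    weights = F.relu(scores)                   # ReLU instead of softmax -> naturally sparse
    out = weights @ v                          # (batch, heads, q_len, dim)
    # RMS-normalize over the feature dimension to keep activations in a stable range.
    rms = out.pow(2).mean(dim=-1, keepdim=True).add(eps).sqrt()
    return out / rms


q = k = v = torch.randn(2, 4, 16, 32)
print(rela_attention(q, k, v).shape)  # torch.Size([2, 4, 16, 32])
```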
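
Similarly, the Gumbel-Top-k entry refers to the standard trick of perturbing log-probabilities with Gumbel(0, 1) noise and taking the top-k indices, which draws k items without replacement from the categorical distribution; the repository's "ancestral" variant extends this to sequence models. The `gumbel_top_k` helper below is a minimal sketch of the basic trick, not the repository's API.

```python
# Minimal sketch of Gumbel-Top-k sampling: adding Gumbel(0, 1) noise to
# log-probabilities and taking the top-k indices samples k distinct items
# from the corresponding categorical distribution.
import torch


def gumbel_top_k(log_probs, k):
    """Sample k distinct indices from an (unnormalized) log-probability vector."""
    gumbel = -torch.log(-torch.log(torch.rand_like(log_probs)))  # Gumbel(0, 1) noise
    return torch.topk(log_probs + gumbel, k).indices


logits = torch.log_softmax(torch.randn(10), dim=0)
print(gumbel_top_k(logits, k=3))  # e.g. tensor([7, 2, 5])
```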