apd10 / universal_memory_allocation
☆15 · Updated 3 years ago
Alternatives and similar repositories for universal_memory_allocation
Users interested in universal_memory_allocation are comparing it to the libraries listed below.
- A Tensor Train based compression library for sparse embedding tables used in large-scale machine learning models such as … ☆194 · Updated 3 years ago
- A Learnable LSH Framework for Efficient NN Training ☆33 · Updated 4 years ago
- [NeurIPS '22] Data distillation for recommender systems; shows equivalent performance with 2-3 orders of magnitude less data. ☆23 · Updated 2 years ago
- Differentiable Product Quantization for End-to-End Embedding Compression. ☆64 · Updated 2 years ago
- Implementation of vector quantization algorithms; code for Norm-Explicit Quantization: Improving Vector Quantization for Maximum Inner P… ☆59 · Updated 4 years ago
- High performance PyTorch modules ☆18 · Updated 2 years ago
- ☆14 · Updated 3 years ago
- Time-based Sequence Model for Personalization and Recommendation Systems ☆49 · Updated 4 years ago
- A study of the downstream instability of word embeddings ☆12 · Updated 3 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- AdamW optimizer for bfloat16 models in PyTorch 🔥 ☆38 · Updated last year
- ☆27 · Updated 6 years ago
- A deep learning library based on PyTorch, focused on low-resource language research and robustness ☆70 · Updated 3 years ago
- ☆15 · Updated 3 years ago
- Code for COMET: Cardinality Constrained Mixture of Experts with Trees and Local Search ☆11 · Updated 2 years ago
- Large Scale Graphical Model ☆24 · Updated 6 years ago
- A Python library for highly configurable transformers, easing model architecture search and experimentation. ☆49 · Updated 3 years ago
- Simple ranking metrics for PyTorch on CPU or GPU ☆15 · Updated 5 years ago
- Ancestral Gumbel-Top-k Sampling ☆25 · Updated 5 years ago
- [ICLR 2021] "UMEC: Unified Model and Embedding Compression for Efficient Recommendation Systems" by Jiayi Shen, Haotao Wang*, Shupeng Gui… ☆39 · Updated 3 years ago
- sigma-MoE layer ☆20 · Updated last year
- A collection of optimizers, some arcane, others well known, for Flax. ☆29 · Updated 4 years ago
- Confident Adaptive Transformers ☆14 · Updated 4 years ago
- ☆32 · Updated last year
- Research and development for optimizing transformers ☆131 · Updated 4 years ago
- Extremely simple and fast extreme multi-class and multi-label classifiers. ☆70 · Updated 2 weeks ago
- Code for the paper Deformable Butterfly: A Highly Structured and Sparse Linear Transform. ☆13 · Updated 4 years ago
- Efficient LDA solution on GPUs. ☆24 · Updated 7 years ago
- Official PyTorch implementation of the paper 'SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients' ☆17 · Updated 3 years ago
- Implementation of Token Shift GPT, an autoregressive model that relies solely on shifting the sequence space for mixing ☆50 · Updated 3 years ago