facebookresearch / NasRec
NASRec: Weight Sharing Neural Architecture Search for Recommender Systems
☆31 · Updated 2 years ago
Alternatives and similar repositories for NasRec
Users interested in NasRec are comparing it to the libraries listed below.
- Experimental scripts for researching data-adaptive learning rate scheduling ☆22 · Updated 2 years ago
- ☆34 · Updated 6 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- Pixel Parsing: a reproduction of OCR-free end-to-end document understanding models with open data ☆23 · Updated last year
- Code for the paper "Accessing higher dimensions for unsupervised word translation" ☆22 · Updated 2 years ago
- Official code for "Binary embedding based retrieval at Tencent" ☆44 · Updated last year
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- Implementation of a Light Recurrent Unit in PyTorch ☆49 · Updated last year
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers ☆49 · Updated 2 years ago
- Enable everyone to develop, optimize, and deploy AI models natively on their own devices ☆12 · Updated last year
- PyTorch implementation of the paper "ViP: A Differentially Private Foundation Model for Computer Vision" ☆36 · Updated 2 years ago
- Utilities for Training Very Large Models ☆58 · Updated last year
- Tools for content datamining and NLP at scale ☆44 · Updated last year
- A dashboard for exploring timm learning rate schedulers ☆19 · Updated last year
- Explorations into adversarial losses on top of autoregressive loss for language modeling ☆38 · Updated last week
- [NeurIPS 2023] Sparse Modular Activation for Efficient Sequence Modeling ☆40 · Updated 2 years ago
- Code repository for the public reproduction of the language modelling experiments in "MatFormer: Nested Transformer for Elastic Inference…" ☆30 · Updated 2 years ago
- [NeurIPS 2022 Spotlight] Official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity" ☆74 · Updated 3 years ago
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆59 · Updated 2 years ago
- Solution for the Kaggle competition "Feedback Prize - Evaluating Student Writing" ☆16 · Updated 3 years ago
- DPO, but faster 🚀 ☆46 · Updated last year
- ResiDual: Transformer with Dual Residual Connections, https://arxiv.org/abs/2304.14802 ☆96 · Updated 2 years ago
- ☆188 · Updated last year
- Timm model explorer ☆42 · Updated last year
- Code for experiments in "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy" ☆101 · Updated last year
- Model compression for ONNX ☆99 · Updated last year
- ☆52 · Updated 2 years ago
- ViT trained on the COYO-Labeled-300M dataset ☆33 · Updated 3 years ago
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆121 · Updated last year
- IntLLaMA: a fast and light quantization solution for LLaMA ☆18 · Updated 2 years ago