facebookresearch / NasRec
NASRec: Weight Sharing Neural Architecture Search for Recommender Systems
☆31 · Updated 2 years ago
Alternatives and similar repositories for NasRec
Users who are interested in NasRec are comparing it to the libraries listed below.
- Official code for "Binary Embedding-based Retrieval at Tencent" ☆44 · Updated last year
- Pixel Parsing. A reproduction of OCR-free end-to-end document understanding models with open data ☆23 · Updated last year
- Code for the paper "Accessing higher dimensions for unsupervised word translation" ☆22 · Updated 2 years ago
- ResiDual: Transformer with Dual Residual Connections (https://arxiv.org/abs/2304.14802) ☆97 · Updated 2 years ago
- Enable everyone to develop, optimize, and deploy AI models natively on their own devices. ☆12 · Updated last year
- Experimental scripts for researching data-adaptive learning rate scheduling. ☆22 · Updated 2 years ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- ☆34 · Updated 6 months ago
- Official implementation of "Active Image Indexing" ☆60 · Updated 2 years ago
- Implementation of the Kalman Filtering Attention proposed in "Kalman Filtering Attention for User Behavior Modeling in CTR Prediction" ☆59 · Updated 2 years ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆38 · Updated 6 months ago
- Implementation of a Light Recurrent Unit in PyTorch ☆49 · Updated last year
- A dashboard for exploring timm learning rate schedulers ☆19 · Updated last year
- Model compression for ONNX ☆99 · Updated last year
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google, in PyTorch ☆58 · Updated last week
- Utilities for Training Very Large Models ☆58 · Updated last year
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory-efficient Transformers ☆49 · Updated 2 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 (see the sketch after this list) ☆49 · Updated 3 years ago
- Some personal experiments around routing tokens to different autoregressive attention modules, akin to mixture-of-experts ☆121 · Updated last year
- LoRA fine-tuned Stable Diffusion deployment ☆31 · Updated 2 years ago
- Code repository for the public reproduction of the language modelling experiments on "MatFormer: Nested Transformer for Elastic Inference" ☆30 · Updated 2 years ago
- Library for the Test-based Calibration Error (TCE) metric to quantify the degree of classifier calibration. ☆13 · Updated 2 years ago
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last year
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆57 · Updated last year
- Udacity CS344 Introduction to Parallel Programming (https://classroom.udacity.com/courses/cs344), with assignments/materials updated to … ☆46 · Updated 4 years ago
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆25 · Updated 5 months ago
- Tiled Flash Linear Attention library for fast and efficient mLSTM kernels. ☆79 · Updated last month
- CLIP retrieval benchmark ☆17 · Updated 3 years ago
- Implementation of Token Shift GPT - an autoregressive model that solely relies on shifting the sequence space for mixing ☆50 · Updated 3 years ago
- Linear Attention Sequence Parallelism (LASP) ☆88 · Updated last year
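
The ReLA entry above names a mechanism simple enough to sketch: per the linked paper (https://arxiv.org/abs/2104.07012), the softmax over attention scores is replaced by a ReLU, yielding sparse, non-negative weights, with an RMS-style normalization on the output to keep activations stable. The minimal single-head PyTorch sketch below illustrates that idea; the class name `ReLAttention` and the single-head layout are illustrative assumptions, not the linked repo's actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReLAttention(nn.Module):
    """Minimal single-head sketch of Rectified Linear Attention (ReLA):
    softmax is swapped for ReLU, and the per-position output is
    RMS-normalized with a learned gain, following arXiv:2104.07012."""

    def __init__(self, dim):
        super().__init__()
        self.scale = dim ** -0.5
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.norm_gain = nn.Parameter(torch.ones(dim))  # learned RMS-norm gain
        self.to_out = nn.Linear(dim, dim, bias=False)

    def forward(self, x):
        # x: (batch, seq_len, dim)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        # ReLU instead of softmax: weights are sparse and non-negative,
        # but no longer sum to 1 across the sequence.
        attn = F.relu(q @ k.transpose(-2, -1) * self.scale)  # (batch, seq, seq)
        out = attn @ v
        # RMS-normalize each position's output to compensate for the
        # unnormalized attention weights.
        out = out * torch.rsqrt(out.pow(2).mean(dim=-1, keepdim=True) + 1e-8)
        return self.to_out(out * self.norm_gain)
```

A quick smoke test: `ReLAttention(64)(torch.randn(2, 16, 64))` should return a `(2, 16, 64)` tensor; note the ReLU makes attention weights exactly zero wherever the scaled scores are negative, which is the sparsity the paper targets.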