facebookresearch / NasRec
NASRec: Weight Sharing Neural Architecture Search for Recommender Systems
☆31 · Updated 2 years ago
Alternatives and similar repositories for NasRec
Users interested in NasRec are comparing it to the libraries listed below.
- Official code for "Binary embedding based retrieval at Tencent" ☆44 · Updated last year
- ☆34 · Updated 7 months ago
- Enable everyone to develop, optimize and deploy AI models natively on everyone's devices. ☆13 · Updated last year
- Experimental scripts for researching data adaptive learning rate scheduling. ☆22 · Updated 2 years ago
- Pixel Parsing. A reproduction of OCR-free end-to-end document understanding models with open data ☆23 · Updated last year
- Official implementation of "Active Image Indexing" ☆60 · Updated 2 years ago
- Model compression for ONNX ☆98 · Updated last year
- This library empowers users to seamlessly port pretrained models and checkpoints on the HuggingFace (HF) hub (developed using HF transfor… ☆85 · Updated this week
- Implementation of a Light Recurrent Unit in Pytorch ☆49 · Updated last year
- Code for paper Rethinking the Data Annotation Process for Multi-view 3D Pose Estimation with Active Learning and Self-Training ☆22 · Updated 2 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆49 · Updated 3 years ago
- Timm model explorer ☆42 · Updated last year
- The Triton backend for the PyTorch TorchScript models. ☆173 · Updated this week
- 32 times longer context window than vanilla Transformers and up to 4 times longer than memory efficient Transformers. ☆50 · Updated 2 years ago
- Official PyTorch implementation of "LayerMerge: Neural Network Depth Compression through Layer Pruning and Merging" (ICML 2024) ☆31 · Updated last year
- Linear Attention Sequence Parallelism (LASP) ☆88 · Updated last year
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆25 · Updated 6 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- Exploration into the proposed "Self Reasoning Tokens" by Felipe Bonetto ☆57 · Updated last year
- A block oriented training approach for inference time optimization. ☆34 · Updated last year
- ☆61 · Updated 2 years ago
- ☆52 · Updated 3 years ago
- Code for paper "Accessing higher dimensions for unsupervised word translation" ☆22 · Updated 2 years ago
- PostText is a QA system for querying your text data. When appropriate structured views are in place, PostText is good at answering querie… ☆31 · Updated 2 years ago
- Repository for Sparse Finetuning of LLMs via modified version of the MosaicML llmfoundry ☆42 · Updated 2 years ago
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆58 · Updated last week
- A dashboard for exploring timm learning rate schedulers ☆19 · Updated last year
- The Triton backend for the ONNX Runtime. ☆172 · Updated last week
- Some personal experiments around routing tokens to different autoregressive attention, akin to mixture-of-experts ☆122 · Updated last year
- A very simple tool for situations where optimization with onnx-simplifier would exceed the Protocol Buffers upper file size limit of 2GB,… ☆17 · Updated this week