BAI-Yeqi / Statistical-Properties-of-Dot-Product
☆17 · Updated 4 years ago
Alternatives and similar repositories for Statistical-Properties-of-Dot-Product
Users interested in Statistical-Properties-of-Dot-Product are comparing it to the repositories listed below.
- This package implements THOR: Transformer with Stochastic Experts. ☆65 · Updated 4 years ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆127 · Updated last year
- ICLR 2022 paper submission trend analysis from https://openreview.net/group?id=ICLR.cc/2022/Conference ☆85 · Updated 3 years ago
- A Transformer model based on the Gated Attention Unit (preview version) ☆98 · Updated 2 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆87 · Updated 2 years ago
- This PyTorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022). ☆46 · Updated 3 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models (https://arxiv.org/abs/2204.00408) ☆198 · Updated 2 years ago
- A simple experiment with Ladder Side-Tuning on CLUE ☆22 · Updated 3 years ago
- [Findings of ACL 2023] Communication Efficient Federated Learning for Multilingual Machine Translation with Adapter ☆12 · Updated 2 years ago
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆112 · Updated 3 years ago
- ☆142 · Updated last year
- A Tight-fisted Optimizer ☆50 · Updated 2 years ago
- Code release for "LogME: Practical Assessment of Pre-trained Models for Transfer Learning" (ICML 2021) and Ranking and Tuning Pre-trained… ☆211 · Updated 2 years ago
- Must-read papers on improving efficiency for pre-trained language models. ☆105 · Updated 3 years ago
- Source code for our AAAI'22 paper "From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression" ☆25 · Updated 4 years ago
- [ICLR 2022] Official implementation of cosformer-attention in cosFormer: Rethinking Softmax in Attention ☆196 · Updated 3 years ago
- A PyTorch & Keras implementation and demo of Fastformer. ☆191 · Updated 3 years ago
- [KDD'22] Learned Token Pruning for Transformers ☆102 · Updated 2 years ago
- ☆35 · Updated 4 years ago
- ☆33 · Updated 4 years ago
- Source code for the TMLR paper "Black-Box Prompt Learning for Pre-trained Language Models" ☆57 · Updated 2 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆56 · Updated 2 years ago
- ☆45 · Updated last month
- Code for the AAAI 2022 publication "Well-classified Examples are Underestimated in Classification with Deep Neural Networks" ☆54 · Updated 3 years ago
- Code for the paper "What Data Benefits My Classifier?" Enhancing Model Performance and Interpretability through Influence-Based Data Selecti… ☆24 · Updated last year
- Mixture of Attention Heads ☆51 · Updated 3 years ago
- A curated list of Early Exiting papers, benchmarks, and misc. ☆119 · Updated 2 years ago
- Crawl & visualize ICLR papers and reviews. ☆18 · Updated 3 years ago
- ☆18 · Updated last year
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" ☆51 · Updated 3 years ago