BAI-Yeqi / Statistical-Properties-of-Dot-Product
☆15 · Updated 3 years ago
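A brief note on the repository this page is built around. Assuming (as the name suggests) that Statistical-Properties-of-Dot-Product analyzes how the variance of a dot product between random vectors grows with dimension, which is the property behind the 1/√d factor in scaled dot-product attention, the short Python sketch below reproduces that behaviour empirically; the sketch is illustrative and none of its names are taken from the repo itself.

```python
# Minimal sketch (assumption: the repo studies dot-product statistics as used in
# attention scaling; this example is not taken from the repository's code).
import numpy as np

rng = np.random.default_rng(0)
n_samples = 10_000
for d in (16, 64, 256, 1024):
    # Pairs of independent vectors with i.i.d. standard-normal entries.
    q = rng.standard_normal((n_samples, d))
    k = rng.standard_normal((n_samples, d))
    dots = np.einsum("nd,nd->n", q, k)
    # Var(q·k) = d for unit-variance entries, so q·k / sqrt(d) has variance ≈ 1.
    print(f"d={d:5d}  var(q·k)={dots.var():9.1f}  var(q·k/sqrt(d))={(dots / np.sqrt(d)).var():.3f}")
```

For i.i.d. unit-variance entries, Var(q·k) = d, so dividing the scores by √d keeps their variance near 1 regardless of dimension; this is the normalization used in scaled dot-product attention.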
Alternatives and similar repositories for Statistical-Properties-of-Dot-Product:
Users who are interested in Statistical-Properties-of-Dot-Product are comparing it to the libraries listed below.
- [ICML 2023] "Data Efficient Neural Scaling Law via Model Reusing" by Peihao Wang, Rameswar Panda, Zhangyang Wang ☆14 · Updated last year
- A Transformer model based on the Gated Attention Unit (preview version) ☆97 · Updated last year
- [Findings of ACL 2023] Communication Efficient Federated Learning for Multilingual Machine Translation with Adapter ☆12 · Updated last year
- A Tight-fisted Optimizer ☆47 · Updated last year
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆80 · Updated last year
- Source code for our AAAI'22 paper "From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression" ☆23 · Updated 3 years ago
- Crawl & visualize ICLR papers and reviews. ☆18 · Updated 2 years ago
- [EMNLP 2022] Official implementation of Transnormer from our paper "The Devil in Linear Transformer" ☆59 · Updated last year
- ☆14 · Updated last year
- Code for the EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models" ☆40 · Updated 2 years ago
- This package implements THOR: Transformer with Stochastic Experts. ☆62 · Updated 3 years ago
- ☆32 · Updated 3 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆48 · Updated last year
- This PyTorch package implements PLATON: Pruning Large Transformer Models with Upper Confidence Bound of Weight Importance (ICML 2022). ☆43 · Updated 2 years ago
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆100 · Updated 2 years ago
- [ICLR 2023] Tailoring Language Generation Models under Total Variation Distance ☆21 · Updated 2 years ago
- Mixture of Attention Heads ☆41 · Updated 2 years ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆119 · Updated 11 months ago
- Official implementation of "Privacy Implications of Retrieval-Based Language Models" (EMNLP 2023). https://arxiv.org/abs/2305.14888 ☆35 · Updated 8 months ago
- Must-read papers on improving efficiency for pre-trained language models. ☆102 · Updated 2 years ago
- ☆95 · Updated 4 months ago
- A pre-trained model with multi-exit transformer architecture. ☆55 · Updated 2 years ago
- ☆46 · Updated last month
- ☆63 · Updated 4 years ago
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" ☆45 · Updated 2 years ago
- Less is More: Task-aware Layer-wise Distillation for Language Model Compression (ICML 2023) ☆33 · Updated last year
- A simple trial of Ladder Side-Tuning on CLUE ☆19 · Updated 2 years ago
- Two Stones Hit One Bird: Bilevel Positional Encoding for Better Length Extrapolation, ICML 2024 ☆21 · Updated 7 months ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆30 · Updated last year
- ☆33 · Updated 3 years ago