BAI-Yeqi / Statistical-Properties-of-Dot-Product
☆16 · Updated 3 years ago
Alternatives and similar repositories for Statistical-Properties-of-Dot-Product:
Users interested in Statistical-Properties-of-Dot-Product are comparing it to the repositories listed below.
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆84 · Updated 2 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆51 · Updated 2 years ago
- This package implements THOR: Transformer with Stochastic Experts. ☆61 · Updated 3 years ago
- A Transformer model based on the Gated Attention Unit (preview version) ☆97 · Updated 2 years ago
- [Findings of ACL 2023] Communication Efficient Federated Learning for Multilingual Machine Translation with Adapter ☆12 · Updated last year
- Crawl & visualize ICLR papers and reviews. ☆18 · Updated 2 years ago
- Must-read papers on improving efficiency for pre-trained language models. ☆103 · Updated 2 years ago
- [EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper "The Devil in Linear Transformer" ☆60 · Updated last year
- Mixture of Attention Heads ☆44 · Updated 2 years ago
- A Tight-fisted Optimizer ☆47 · Updated 2 years ago
- Official implementation of "Privacy Implications of Retrieval-Based Language Models" (EMNLP 2023). https://arxiv.org/abs/2305.14888 ☆35 · Updated 10 months ago
- Source code for our AAAI'22 paper "From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression" ☆23 · Updated 3 years ago
- [ICML 2023] "Data Efficient Neural Scaling Law via Model Reusing" by Peihao Wang, Rameswar Panda, Zhangyang Wang ☆14 · Updated last year
- Some examples of drawing illustration plots for papers using the seaborn package ☆15 · Updated 5 years ago
- ☆33 · Updated 4 years ago
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆103 · Updated 2 years ago
- ☆33 · Updated 3 years ago
- Official implementation for EMNLP 2024 (main) "AgentReview: Exploring Academic Peer Review with LLM Agent." ☆49 · Updated 5 months ago
- ☆14 · Updated last year
- Code for the ACL 2022 paper "StableMoE: Stable Routing Strategy for Mixture of Experts" ☆45 · Updated 2 years ago
- SCR: Training Graph Neural Networks with Consistency Regularization ☆37 · Updated 2 years ago
- Code for the EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models" ☆40 · Updated 2 years ago
- Code and data for the paper JiuZhang3.0 ☆43 · Updated 11 months ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆121 · Updated last year
- Are Intermediate Layers and Labels Really Necessary? A General Language Model Distillation Method; GKD: A General Knowledge Distillation… ☆32 · Updated last year
- ☆30 · Updated 7 months ago
- ☆98 · Updated 6 months ago
- ☆56 · Updated 2 years ago
- LongSpec: Long-Context Speculative Decoding with Efficient Drafting and Verification ☆50 · Updated last month
- Converting Mixtral-8x7B to Mixtral-[1~7]x7B ☆22 · Updated last year