Doraemonzzz / Transformer-Evolution-Paper
Notes on papers tracking improvements to the Transformer architecture
☆19 · Updated 2 years ago
Alternatives and similar repositories for Transformer-Evolution-Paper
Users interested in Transformer-Evolution-Paper are comparing it to the libraries listed below.
- Source code for our AAAI'22 paper "From Dense to Sparse: Contrastive Pruning for Better Pre-trained Language Model Compression" ☆25 · Updated 3 years ago
- ☆12 · Updated last year
- Methods and evaluation for aligning language models temporally ☆30 · Updated last year
- Crawl & visualize ICLR papers and reviews. ☆18 · Updated 3 years ago
- [NeurIPS 2022] "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models", Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li… ☆21 · Updated last year
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆87 · Updated 2 years ago
- [Findings of EMNLP 2022] From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models ☆19 · Updated 2 years ago
- Code for the AAAI 2022 publication "Well-classified Examples are Underestimated in Classification with Deep Neural Networks" ☆54 · Updated 3 years ago
- Mixture of Attention Heads ☆51 · Updated 3 years ago
- 😎 A simple and easy-to-use toolkit for GPU scheduling. ☆45 · Updated 6 months ago
- ☆16 · Updated 2 years ago
- ☆18 · Updated last year
- Code for promptCSE (EMNLP 2022) ☆11 · Updated 2 years ago
- Policies of scientific publishers and conferences toward large language models (LLMs) such as ChatGPT ☆75 · Updated 2 years ago
- Self-adaptive in-context learning ☆45 · Updated 2 years ago
- [NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning ☆33 · Updated 2 years ago
- [ACL 2023] Code for the paper “Tailoring Instructions to Student’s Learning Levels Boosts Knowledge Distillation” (https://arxiv.org/abs/2305.…) ☆38 · Updated 2 years ago
- The official repo for the paper "Teacher Forcing Recovers Reward Functions for Text Generation" ☆31 · Updated 2 years ago
- Source code for "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆40 · Updated last year
- A server GPU monitor that sends a WeChat notification when GPU metrics meet preset conditions ☆32 · Updated 4 years ago
- ☆14 · Updated 2 years ago
- ☆20 · Updated last year
- Run the tokenizer in parallel to achieve a substantial speedup ☆20 · Updated last year
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated 2 years ago
- ☆21 · Updated 2 years ago
- ☆14 · Updated 3 years ago
- A probabilistic model for contextual word representation; accepted to ACL 2023 Findings ☆25 · Updated 2 years ago
- Code for the paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning" (ACL 2022) ☆63 · Updated 3 years ago
- Code associated with the paper "Few-Shot Self-Rationalization with Natural Language Prompts" ☆13 · Updated 3 years ago
- A collection of MoE (Mixture of Experts) papers, code, tools, etc. ☆12 · Updated last year