nuaa-nlp / paper-reading
☆46Updated 2 weeks ago
Alternatives and similar repositories for paper-reading:
Users interested in paper-reading are comparing it to the repositories listed below
- 揣摩研习社 follows frontier work in natural language processing and information retrieval, interprets trending research papers, shares practical research tools, and explores the academic and applied value beneath the tip of the AI iceberg!☆37Updated 2 years ago
- self-adaptive in-context learning☆45Updated 2 years ago
- ☆33Updated 3 years ago
- my commonly-used tools☆53Updated 4 months ago
- ☆56Updated 2 years ago
- Source code for our EMNLP'21 paper "Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning"☆60Updated 3 years ago
- Code for EMNLP 2021 main conference paper "Dynamic Knowledge Distillation for Pre-trained Language Models"☆40Updated 2 years ago
- ☆17Updated last year
- ☆61Updated 2 years ago
- ☆25Updated 2 years ago
- ☆53Updated 3 years ago
- Example code and baseline implementation for Arena Challenge 3, the large-scale pre-training tuning competition☆38Updated 2 years ago
- Repo for outstanding paper@ACL 2023 "Do PLMs Know and Understand Ontological Knowledge?"☆31Updated last year
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning☆39Updated last year
- Methods and evaluation for aligning language models temporally☆29Updated last year
- Enhances Overleaf by allowing article searches and BibTeX retrieval from DBLP and Google Scholar☆65Updated 3 weeks ago
- Feeling confused about super alignment? Here is a reading list☆42Updated last year
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations"☆63Updated last year
- Group Meeting Record for Baobao Chang Group in Peking University☆26Updated 3 years ago
- Information on NLP PhD applications around the world.☆36Updated 8 months ago
- [ACL 2024 (Oral)] A Prospector of Long-Dependency Data for Large Language Models☆55Updated 9 months ago
- Released code for our ICLR23 paper.☆65Updated 2 years ago
- Must-read papers on improving efficiency for pre-trained language models.☆103Updated 2 years ago
- Resources for our ACL 2023 paper: Distilling Script Knowledge from Large Language Models for Constrained Language Planning☆36Updated last year
- [NeurIPS 2022] "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models", Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li…☆21Updated last year
- A simple experiment with Ladder Side-Tuning on CLUE☆20Updated 2 years ago
- MATCH-TUNING☆15Updated 2 years ago
- ☆16Updated 3 years ago
- ACL'2023: Multi-Task Pre-Training of Modular Prompt for Few-Shot Learning☆41Updated 2 years ago
- Code for ACL 2023 paper titled "Lifting the Curse of Capacity Gap in Distilling Language Models"☆28Updated last year