nuaa-nlp / paper-reading
☆46 · Updated last month
Alternatives and similar repositories for paper-reading
Users interested in paper-reading are comparing it to the repositories listed below.
- 揣摩研习社 follows cutting-edge techniques in natural language processing and information retrieval, interprets trending research papers, shares practical research tools, and uncovers the academic and applied value beneath the tip of the AI iceberg! ☆37 · Updated 2 years ago
- self-adaptive in-context learning ☆45 · Updated 2 years ago
- ☆32 · Updated 3 years ago
- Must-read papers on improving efficiency for pre-trained language models. ☆104 · Updated 2 years ago
- Source code for our EMNLP'21 paper "Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning" ☆61 · Updated 3 years ago
- ☆56 · Updated 2 years ago
- Group Meeting Record for Baobao Chang Group at Peking University ☆26 · Updated 4 years ago
- [EMNLP 2023] C-STS: Conditional Semantic Textual Similarity ☆73 · Updated last year
- ☆53 · Updated 3 years ago
- Aims to catalogue researchers and research institutions in the text generation field, from industry and entrepreneurship, in China and abroad. Listed in no particular order; continuously updated, contributions welcome. ☆50 · Updated 4 years ago
- Example code and baseline implementation for 擂台赛3, the large-scale pre-training tuning competition ☆37 · Updated 2 years ago
- Information on NLP PhD applications around the world. ☆37 · Updated 10 months ago
- ☆42 · Updated last year
- Tips for paper writing and research: notes and summaries of experience with scientific paper writing ☆135 · Updated 3 years ago
- Paper list of "The Life Cycle of Knowledge in Big Language Models: A Survey" ☆59 · Updated last year
- Dataset and baseline for Coling 2022 long paper (oral): "ConFiguRe: Exploring Discourse-level Chinese Figures of Speech" ☆11 · Updated last year
- ACL'2023: Multi-Task Pre-Training of Modular Prompt for Few-Shot Learning ☆40 · Updated 2 years ago
- Paradigm shift in natural language processing ☆42 · Updated 3 years ago
- A paper list of pre-trained language models (PLMs). ☆81 · Updated 3 years ago
- Released code for our ICLR23 paper. ☆65 · Updated 2 years ago
- ☆67 · Updated 3 years ago
- Resources for our ACL 2023 paper: Distilling Script Knowledge from Large Language Models for Constrained Language Planning ☆36 · Updated last year
- 🩺 A collection of ChatGPT evaluation reports on various benchmarks. ☆49 · Updated 2 years ago
- ☆64 · Updated 2 years ago
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆47 · Updated last year
- Code for CascadeBERT, Findings of EMNLP 2021 ☆12 · Updated 3 years ago
- ☆73 · Updated 3 years ago
- [Findings of EMNLP22] From Mimicking to Integrating: Knowledge Integration for Pre-Trained Language Models ☆19 · Updated 2 years ago
- [NeurIPS 2022] "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models", Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li… ☆21 · Updated last year
- ☆66 · Updated last year