nuaa-nlp / paper-reading
☆45 · Updated 3 months ago
Alternatives and similar repositories for paper-reading
Users interested in paper-reading are comparing it to the repositories listed below.
- self-adaptive in-context learning ☆45 · Updated 2 years ago
- 揣摩研习社 follows frontier work in natural language processing and information retrieval, interprets trending research papers, shares practical research tools, and digs out the academic and applied value beneath the AI iceberg! ☆37 · Updated 2 years ago
- ☆32 · Updated 3 years ago
- Must-read papers on improving efficiency for pre-trained language models. ☆105 · Updated 2 years ago
- Group meeting records for Baobao Chang's group at Peking University ☆26 · Updated 4 years ago
- [EMNLP 2023] C-STS: Conditional Semantic Textual Similarity ☆73 · Updated last year
- ☆43 · Updated 2 years ago
- ☆63 · Updated 2 years ago
- Tips for paper writing and research: notes and a summary of experience in writing scientific papers ☆136 · Updated 3 years ago
- ☆73 · Updated 3 years ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆83 · Updated last year
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆125 · Updated last year
- Released code for our ICLR 2023 paper. ☆65 · Updated 2 years ago
- Source code for our EMNLP'21 paper "Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning" ☆61 · Updated 3 years ago
- ☆57 · Updated 2 years ago
- Feeling confused about superalignment? Here is a reading list. ☆43 · Updated last year
- Paper list for "The Life Cycle of Knowledge in Big Language Models: A Survey" ☆59 · Updated 2 years ago
- A paper list of pre-trained language models (PLMs). ☆81 · Updated 3 years ago
- 🩺 A collection of ChatGPT evaluation reports on various benchmarks. ☆50 · Updated 2 years ago
- [ACL 2023] Multi-Task Pre-Training of Modular Prompt for Few-Shot Learning ☆40 · Updated 2 years ago
- ☆53 · Updated 3 years ago
- Do Large Language Models Know What They Don't Know? ☆99 · Updated 10 months ago
- Repo for the outstanding paper at ACL 2023, "Do PLMs Know and Understand Ontological Knowledge?" ☆32 · Updated last year
- ☆67 · Updated 3 years ago
- A server GPU monitoring program that sends alert messages via WeChat when GPU metrics meet preset conditions ☆32 · Updated 4 years ago
- Example code and baseline implementation for Challenge 3, the large-scale pre-training tuning competition ☆37 · Updated 2 years ago
- Code for "Small Models are Valuable Plug-ins for Large Language Models" ☆131 · Updated 2 years ago
- [NeurIPS 2022] "A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models", Yuanxin Liu, Fandong Meng, Zheng Lin, Jiangnan Li… ☆21 · Updated last year
- A curated list of researchers and research institutions working on text generation in industry, in China and abroad. In no particular order; still being updated, and contributions are welcome. ☆51 · Updated 4 years ago
- Code for M4LE: A Multi-Ability Multi-Range Multi-Task Multi-Domain Long-Context Evaluation Benchmark for Large Language Models ☆23 · Updated last year