lancopku / well-classified-examples-are-underestimated
Code for the AAAI 2022 publication "Well-classified Examples are Underestimated in Classification with Deep Neural Networks"
☆53 · Updated 2 years ago
Alternatives and similar repositories for well-classified-examples-are-underestimated
Users interested in well-classified-examples-are-underestimated are comparing it to the libraries listed below.
- Code for the EMNLP 2022 paper "Distilled Dual-Encoder Model for Vision-Language Understanding" ☆30 · Updated 2 years ago
- Code for the ACL 2022 paper "BERT Learns to Teach: Knowledge Distillation with Meta Learning" ☆86 · Updated 2 years ago
- FlatNCE: A Novel Contrastive Representation Learning Objective ☆90 · Updated 3 years ago
- An implementation of the loss function proposed in https://arxiv.org/pdf/2110.06848.pdf ☆115 · Updated 3 years ago
- Code for the ACL 2022 paper "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning" ☆61 · Updated 3 years ago
- A curated list of awesome Mix ☆69 · Updated 2 years ago
- PyTorch implementation of PiCO (https://arxiv.org/abs/2201.08984) ☆218 · Updated last year
- ☆73 · Updated 3 years ago
- [ICLR 2024] EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling (https://arxiv.org/abs/2310.04691) ☆123 · Updated last year
- On the Effectiveness of Parameter-Efficient Fine-Tuning ☆38 · Updated last year
- ☆156 · Updated 3 years ago
- ☆35 · Updated last year
- ☆46 · Updated last month
- Advances in few-shot learning, especially for NLP applications ☆30 · Updated 2 years ago
- Source code for the EMNLP 2021 paper "Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning" ☆61 · Updated 3 years ago
- CVPR 2022: Robust Contrastive Learning against Noisy Views ☆83 · Updated 3 years ago
- Mixture of Attention Heads ☆47 · Updated 2 years ago
- MixGen: A New Multi-Modal Data Augmentation ☆124 · Updated 2 years ago
- Code and instructions for baselines in the VLUE benchmark ☆41 · Updated 3 years ago
- ☆28 · Updated last year
- Source code for the NeurIPS 2022 paper "AD-DROP: Attribution-Driven Dropout for Robust Language Model Fine-Tuning"