glanceable-io / ordinal-log-loss
Repository for the COLING 2022 paper: Ordinal Log-Loss - A simple log-based loss function for ordinal text classification.
☆27 · Updated last year
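Below is a minimal PyTorch sketch of the general idea behind a distance-weighted ("ordinal") log loss. It is an illustration only, assuming a softmax classifier over ordered classes and an absolute-distance weighting with exponent `alpha`; the repository contains the paper's actual formulation and official code.

```python
import torch

def ordinal_log_loss(logits, targets, alpha=1.0):
    """Distance-weighted log loss (illustrative sketch, not the official code):
    probability mass placed on classes far from the gold label is penalized
    more heavily than mass on nearby classes."""
    probs = torch.softmax(logits, dim=-1)                      # (batch, classes)
    classes = torch.arange(logits.size(-1), device=logits.device)
    # |class index - gold index| ** alpha for every (example, class) pair
    dist = (classes.unsqueeze(0) - targets.unsqueeze(1)).abs().float() ** alpha
    # -log(1 - p_j) grows as more mass lands on class j; weight it by distance
    loss = -(torch.log(1.0 - probs + 1e-8) * dist).sum(dim=-1)
    return loss.mean()

# Toy example: 5 ordinal classes (e.g. 1-5 star ratings)
logits = torch.randn(8, 5)
targets = torch.randint(0, 5, (8,))
print(ordinal_log_loss(logits, targets, alpha=1.5))
```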
Alternatives and similar repositories for ordinal-log-loss:
Users interested in ordinal-log-loss are comparing it to the repositories listed below.
- ☆28 · Updated 3 years ago
- A statutory article retrieval dataset in French. (ACL 2022) · ☆39 · Updated last year
- This repository contains a demonstrative implementation for pooling-based models, e.g., DeepPyramidion complementing our paper "Sparsifyi… · ☆14 · Updated 2 years ago
- mSimCSE: Multilingual SimCSE · ☆34 · Updated 2 years ago
- Implementation of Mixout with PyTorch · ☆74 · Updated 2 years ago
- ACL22 paper: Imputing Out-of-Vocabulary Embeddings with LOVE Makes Language Models Robust with Little Cost · ☆40 · Updated last year
- This repository contains the code for the paper "Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models" · ☆46 · Updated 2 years ago
- Official Implementation of "DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization" · ☆137 · Updated 2 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper · ☆52 · Updated last year
- A curated list of awesome datasets with human label variation (un-aggregated labels) in Natural Language Processing and Computer Vision, … · ☆79 · Updated 9 months ago
- Code for EMNLP 2021 paper: "Is Everything in Order? A Simple Way to Order Sentences" · ☆41 · Updated last year
- ☆57 · Updated 2 years ago
- Vocabulary Trimming (VT) is a model compression technique, which reduces a multilingual LM vocabulary to a target language by deleting ir… · ☆33 · Updated 3 months ago
- ☆93 · Updated 2 years ago
- Repo for Aspire - A scientific document similarity model based on matching fine-grained aspects of scientific papers. · ☆51 · Updated last year
- ☆24 · Updated 2 years ago
- Efficient Attention for Long Sequence Processing · ☆91 · Updated last year
- A multilingual version of MS MARCO passage ranking dataset · ☆143 · Updated last year
- Hierarchical Attention Transformers (HAT) · ☆46 · Updated last year
- What are the best Systems? New Perspectives on NLP Benchmarking · ☆13 · Updated last year
- Implementation of Self-adjusting Dice Loss from the "Dice Loss for Data-imbalanced NLP Tasks" paper (see the sketch after this list) · ☆107 · Updated 4 years ago
- A simple recipe for training and running inference with a Transformer architecture for Multi-Task Learning on custom datasets. You can find two approa… · ☆93 · Updated 2 years ago
- PyTorch-IE: State-of-the-art Information Extraction in PyTorch · ☆77 · Updated 2 weeks ago
- ☆34 · Updated last year
- [WWW 2022] Topic Discovery via Latent Space Clustering of Pretrained Language Model Representations · ☆87 · Updated 2 years ago
- ☆21 · Updated 3 years ago
- Scripts for document-level grammatical error correction. · ☆16 · Updated 3 years ago
- Code associated with the paper "Entropy-based Attention Regularization Frees Unintended Bias Mitigation from Lists" · ☆47 · Updated 2 years ago
- ☆47 · Updated last year
- Efficient Language Model Training through Cross-Lingual and Progressive Transfer Learning · ☆29 · Updated 2 years ago
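Regarding the Self-adjusting Dice Loss entry above: the sketch below shows one common form of that loss for multi-class classification, assuming a softmax model, a `(1 - p)^alpha` self-adjusting factor, and a smoothing term `gamma`. It is an approximation for illustration, not the linked repository's exact implementation.

```python
import torch

def self_adjusting_dice_loss(logits, targets, alpha=1.0, gamma=1.0):
    """1 - DSC per example, where the (1 - p)^alpha factor down-weights
    easy, high-confidence examples (illustrative sketch only)."""
    probs = torch.softmax(logits, dim=-1)
    # probability the model assigns to the gold class of each example
    p_t = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    weight = (1.0 - p_t) ** alpha
    dice = (2.0 * weight * p_t + gamma) / (weight * p_t + 1.0 + gamma)
    return (1.0 - dice).mean()

logits = torch.randn(8, 3)
targets = torch.randint(0, 3, (8,))
print(self_adjusting_dice_loss(logits, targets))
```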