webis-de / webis-tldr-17-corpus
Code for constructing TLDR corpus from Reddit dataset
☆25 · Updated 3 years ago
Alternatives and similar repositories for webis-tldr-17-corpus
Users who are interested in webis-tldr-17-corpus are comparing it to the libraries listed below.
- GenieNLP: A versatile codebase for any NLP task ☆89 · Updated last year
- A question-answering dataset with a focus on subjective information ☆45 · Updated last year
- StAtutory Reasoning Assessment ☆14 · Updated 2 years ago
- Wikipedia-based dataset for training relationship classifiers and fact extraction models ☆25 · Updated 4 years ago
- Legal document similarity - Code, data, and models for the ICAIL 2021 paper "Evaluating Document Representations for Content-based Legal …" ☆32 · Updated 4 years ago
- 💫 spaCy wrapper for ConceptNet 💫 ☆94 · Updated last year
- 💫 A spaCy package for Yohei Tamura's Rust tokenizations library ☆31 · Updated last month
- 🤗 Disaggregators: Curated data labelers for in-depth analysis ☆66 · Updated 2 years ago
- Plug-and-play search interfaces with Pyserini and Hugging Face ☆32 · Updated last year
- Open-source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188) ☆61 · Updated 2 years ago
- Code for Stage-wise Fine-tuning for Graph-to-Text Generation ☆26 · Updated 2 years ago
- Official implementation of the paper "CoEdIT: Text Editing by Task-Specific Instruction Tuning" (EMNLP 2023) ☆126 · Updated 9 months ago
- MultiCite code and data; models are available on Hugging Face ☆32 · Updated 3 years ago
- Code for the Multi-Sentence Inference NAACL'22 paper ☆12 · Updated 2 years ago
- ☆86 · Updated 3 months ago
- Code for loading the SenseBERT model, described in the authors' ACL 2020 paper ☆45 · Updated 2 years ago
- Preprocessing and analysis for training SNOMED-CT concept embeddings from the CORD-19 corpus ☆15 · Updated last year
- TimeLMs: Diachronic Language Models from Twitter ☆108 · Updated last year
- ☆90 · Updated 3 years ago
- Tools for automatically analyzing datasets ☆74 · Updated 8 months ago
- SWIM-IR is a Synthetic Wikipedia-based Multilingual Information Retrieval training set with 28 million query-passage pairs spanning 33 la… ☆48 · Updated last year
- SMASHED is a toolkit designed to apply transformations to samples in datasets, such as field extraction, tokenization, prompting, batchi… ☆33 · Updated last year
- Search Facebook Research's PyTorch BigGraph Wikidata dataset with the Weaviate vector search engine ☆31 · Updated 3 years ago
- The NewSHead dataset is a multi-document headline dataset used in NHNet for training a headline summarization model ☆37 · Updated 3 years ago
- Using short models to classify long texts ☆21 · Updated 2 years ago
- ☆182 · Updated 2 years ago
- BLOOM+1: Adapting the BLOOM model to support a new, unseen language ☆73 · Updated last year
- No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval ☆29 · Updated 2 years ago
- Hashformers is a framework for hashtag segmentation with Transformers and large language models (LLMs) ☆71 · Updated 10 months ago
- Seahorse is a dataset for multilingual, multi-faceted summarization evaluation. It consists of 96K summaries with human ratings along 6 q… ☆88 · Updated last year