edumunozsala / RoBERTa_Encoder_Decoder_Product_Names
Defines Transformer, T5, and RoBERTa encoder-decoder models for product name generation
☆48 · Updated 4 years ago
Alternatives and similar repositories for RoBERTa_Encoder_Decoder_Product_Names
Users interested in RoBERTa_Encoder_Decoder_Product_Names are comparing it to the repositories listed below.
- A simple recipe for training and inferencing Transformer architecture for Multi-Task Learning on custom datasets. You can find two approa… ☆99 · Updated 3 years ago
- A repo to explore different NLP tasks which can be solved using T5 ☆173 · Updated 4 years ago
- PyTorch implementation of Recurrence over BERT (RoBERT) based on this paper https://arxiv.org/abs/1910.10781 and comparison with PyTorch … ☆82 · Updated 3 years ago
- This repository contains the code, data, and models of the paper titled "XL-Sum: Large-Scale Multilingual Abstractive Summarization for 4… ☆276 · Updated last year
- Tutorial for first-time BERT users ☆103 · Updated 3 years ago
- Benchmarking various Deep Learning models such as BERT, ALBERT, BiLSTMs on the task of sentence entailment using two datasets - MultiNLI … ☆28 · Updated 5 years ago
- This is where I put things I find useful that speed up my work with Machine Learning. Ever looked in your old projects to reuse those coo… ☆263 · Updated 3 years ago
- Some notebooks for NLP ☆207 · Updated 2 years ago
- Fine-tuning GPT-2 Small for Question Answering ☆130 · Updated 3 years ago
- ☆60 · Updated 4 years ago
- This repository contains materials for the SIGIR 2022 tutorial on opinion summarization. ☆33 · Updated 3 years ago
- Awesome Question Answering ☆29 · Updated 3 years ago
- MobileBERT and DistilBERT for extractive summarization ☆93 · Updated 2 years ago
- Code for the EMNLP 2022 paper "Zero-Shot Text Classification with Self-Training" ☆51 · Updated 3 months ago
- Efficient Attention for Long Sequence Processing ☆98 · Updated 2 years ago
- A multi-purpose toolkit for table-to-text generation: web interface, Python bindings, CLI commands. ☆57 · Updated last year
- Abstractive and Extractive Text summarization using Transformers. ☆86 · Updated 2 years ago
- Fine-tuned BERT on the SQuAD 2.0 dataset. Applied Knowledge Distillation (KD) and fine-tuned DistilBERT (student) using BERT as the teacher m… ☆26 · Updated 4 years ago
- The official code for PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization ☆156 · Updated 3 years ago
- ☆42 · Updated 4 years ago
- [NAACL 2021] This is the code for our paper "Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self… ☆206 · Updated 3 years ago
- ICONIP 2021 - A Vietnamese Medical Dataset for IC and NER ☆25 · Updated 2 years ago
- The source code of "Language Models are Few-shot Multilingual Learners" (MRL @ EMNLP 2021)