azadyasar / NeuralMachineTranslation
PyTorch implementation of NMT models along with custom tokenizers, models, and datasets
☆20 · Updated 2 years ago
Alternatives and similar repositories for NeuralMachineTranslation
Users interested in NeuralMachineTranslation are comparing it to the libraries listed below.
- Repository for the Multilingual-VQA task created during the HuggingFace JAX/Flax community week. ☆34 · Updated 3 years ago
- ☆44 · Updated 3 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- A variant of Transformer-XL where the memory is updated not with a queue, but with attention. ☆48 · Updated 4 years ago
- Long-context pretrained encoder-decoder models. ☆94 · Updated 2 years ago
- Implementation of Memory-Compressed Attention, from the paper "Generating Wikipedia By Summarizing Long Sequences". ☆71 · Updated 2 years ago
- Google's BigBird (Jax/Flax & PyTorch) @ 🤗Transformers. ☆49 · Updated 2 years ago
- All my experiments with the various transformers and transformer frameworks available. ☆14 · Updated 4 years ago
- ☆27 · Updated 5 months ago
- A Multilingual Replicable Instruction-Following Model. ☆93 · Updated last year
- ☆34 · Updated 4 years ago
- Reduce the size of pretrained Hugging Face models via vocabulary trimming. ☆44 · Updated 2 years ago
- The PyTorch implementation of ReCoSa (the Relevant Contexts with Self-attention) for dialogue generation using the multi-head attention an… ☆22 · Updated last year
- Official Implementation of "DialogLM: Pre-trained Model for Long Dialogue Understanding and Summarization." ☆139 · Updated 2 years ago
- Code associated with the "Data Augmentation using Pre-trained Transformer Models" paper. ☆52 · Updated last year
- BLOOM+1: Adapting the BLOOM model to support a new, unseen language. ☆71 · Updated last year
- Semantic Parsing with text-to-text Transformers. ☆20 · Updated 4 years ago
- Source code for the GPT-2 story generation models in the EMNLP 2020 paper "STORIUM: A Dataset and Evaluation Platform for Human-in-the-Lo… ☆39 · Updated last year
- Ensembling Hugging Face transformers made easy. ☆62 · Updated 2 years ago
- Fine-tuned BERT on the SQuAD 2.0 dataset. Applied Knowledge Distillation (KD) and fine-tuned DistilBERT (student) using BERT as the teacher m… ☆25 · Updated 4 years ago
- PyTorch implementation of Recurrence over BERT (RoBERT) based on this paper https://arxiv.org/abs/1910.10781 and comparison with PyTorch … ☆82 · Updated 2 years ago
- Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback. ☆95 · Updated last year
- This tool helps with the automatic generation of grammatically valid, synthetic code-mixed data by utilizing linguistic theories such as Equivalenc… ☆54 · Updated 9 months ago
- ☆12 · Updated 4 years ago
- Implementation of COCO-LM, Correcting and Contrasting Text Sequences for Language Model Pretraining, in PyTorch. ☆45 · Updated 4 years ago
- Benchmarks for evaluating MT models. ☆12 · Updated 10 months ago
- Defines Transformer, T5, and RoBERTa encoder-decoder models for product name generation. ☆48 · Updated 3 years ago
- BERT and RoBERTa fine-tuning on the SQuAD dataset using pytorch-lightning⚡️, 🤗-transformers & 🤗-nlp. ☆36 · Updated last year
- Vocabulary Trimming (VT) is a model compression technique that reduces a multilingual LM vocabulary to a target language by deleting ir… (a minimal sketch of the idea follows this list). ☆37 · Updated 6 months ago
- ☆23 · Updated 11 months ago
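
Several entries above, including the two vocabulary-trimming repositories, revolve around the same core idea: shrinking a multilingual model's embedding matrix down to the tokens a target language actually uses. The snippet below is a minimal, hypothetical sketch of that idea using the 🤗 Transformers API; it is not the implementation of any repository listed here, and the checkpoint name and the toy corpus are placeholder assumptions.

```python
# Minimal, hypothetical sketch of vocabulary trimming (VT): keep only the
# subword tokens a target-language corpus actually uses, then shrink the
# input embedding matrix to match. The checkpoint name and the tiny "corpus"
# below are placeholder assumptions, not taken from any repository above.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "xlm-roberta-base"  # example multilingual checkpoint (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

target_corpus = [
    "A first example sentence.",   # toy target-language data (assumption)
    "A second example sentence.",
]

# 1. Collect the token ids used by the target corpus, always keeping specials.
keep_ids = set(tokenizer.all_special_ids)
for text in target_corpus:
    keep_ids.update(tokenizer(text)["input_ids"])
keep_ids = sorted(keep_ids)

# 2. Slice the input embedding matrix down to the kept rows.
old_embeddings = model.get_input_embeddings().weight.data
new_embeddings = torch.nn.Embedding(len(keep_ids), old_embeddings.size(1))
new_embeddings.weight.data.copy_(old_embeddings[keep_ids])
model.set_input_embeddings(new_embeddings)
model.config.vocab_size = len(keep_ids)

print(f"Trimmed vocabulary: {old_embeddings.size(0)} -> {len(keep_ids)} tokens")
```

A complete tool would also have to rebuild the tokenizer's vocabulary files and remap any weight-tied LM head to the new token ids; this sketch deliberately leaves that out.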