bminixhofer / gerpt2
German small and large versions of GPT2.
☆20 Updated 2 years ago
Alternatives and similar repositories for gerpt2:
Users interested in gerpt2 are comparing it to the repositories listed below.
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 Updated 2 years ago
- A Benchmark Dataset for Understanding Disfluencies in Question Answering ☆62 Updated 3 years ago
- As good as new. How to successfully recycle English GPT-2 to make models for other languages (ACL Findings 2021) ☆48 Updated 3 years ago
- Code for the paper: Saying No is An Art: Contextualized Fallback Responses for Unanswerable Dialogue Queries ☆19 Updated 3 years ago
- A collection of scripts to preprocess ASR datasets and finetune language-specific Wav2Vec2 XLSR models ☆31 Updated 3 years ago
- Code and data for the IWSLT 2022 shared task on Formality Control for SLT ☆21 Updated last year
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. ☆80 Updated 6 months ago
- Implementation of the paper 'Sentence Bottleneck Autoencoders from Transformer Language Models' ☆17 Updated 3 years ago
- ☆75 Updated 3 years ago
- This repository contains a demonstrative implementation for pooling-based models, e.g., DeepPyramidion, complementing our paper "Sparsifyi… ☆14 Updated 2 years ago
- Generate BERT vocabularies and pretraining examples from Wikipedias ☆18 Updated 4 years ago
- Codebase for the Text-based NP Enrichment (TNE) paper ☆20 Updated last year
- Accelerated NLP pipelines for fast inference on CPU. Built with Transformers and ONNX Runtime. ☆126 Updated 4 years ago
- Dataset of sentences from Hindi stories tagged with different emotion tags ☆10 Updated 5 years ago
- A lightweight but powerful library to build token indices for NLP tasks, compatible with major deep learning frameworks like PyTorch and … ☆51 Updated 3 months ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ☆93 Updated 2 years ago
- Implementation of the paper "Fine-Tuning Transformers: Vocabulary Transfer" https://arxiv.org/pdf/2112.14569.pdf ☆20 Updated 3 years ago
- ☆11 Updated 4 years ago
- BERT models for many languages created from Wikipedia texts ☆33 Updated 4 years ago
- classy is a simple-to-use library for building high-performance Machine Learning models in NLP. ☆86 Updated 2 months ago
- A package for fine-tuning Transformers with TPUs, written in TensorFlow 2.0+ ☆37 Updated 4 years ago
- LTG-Bert ☆31 Updated last year
- Execute arbitrary SQL queries on 🤗 Datasets ☆32 Updated last year
- GLADIS: A General and Large Acronym Disambiguation Benchmark (EACL 23) ☆15 Updated 9 months ago
- Implementation of Marge, Pre-training via Paraphrasing, in PyTorch ☆75 Updated 4 years ago
- This repo contains a set of neural transducers, e.g., sequence-to-sequence models, focusing on character-level tasks. ☆74 Updated last year
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ☆69 Updated 4 years ago
- Semantically Structured Sentence Embeddings ☆65 Updated 5 months ago
- Implementation of the GBST block from the Charformer paper, in PyTorch ☆116 Updated 3 years ago
- ☆22 Updated 3 years ago