yk / huggingface-nlp-demo
⭐46 · Updated 5 years ago
Alternatives and similar repositories for huggingface-nlp-demo
Users interested in huggingface-nlp-demo are comparing it to the repositories listed below.
- NLP Examples using the 🤗 libraries ⭐41 · Updated 4 years ago
- On Generating Extended Summaries of Long Documents ⭐78 · Updated 4 years ago
- KitanaQA: Adversarial training and data augmentation for neural question-answering models ⭐57 · Updated last year
- Fine-tune transformers with pytorch-lightning ⭐44 · Updated 3 years ago
- Viewer for the 🤗 datasets library. ⭐84 · Updated 3 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ⭐147 · Updated 3 years ago
- 🤗 Transformers: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch. ⭐68 · Updated 5 years ago
- Low-code pre-built pipelines for experiments with huggingface/transformers for Data Scientists in a rush. ⭐16 · Updated 4 years ago
- ⭐15 · Updated 4 years ago
- ⭐19 · Updated 4 years ago
- Helper scripts and notes that were used while porting various NLP models ⭐46 · Updated 3 years ago
- Load What You Need: Smaller Multilingual Transformers for PyTorch and TensorFlow 2.0. ⭐103 · Updated 3 years ago
- A package for fine-tuning Transformers with TPUs, written in TensorFlow 2.0+ ⭐38 · Updated 4 years ago
- Implementation, trained models and result data for the paper "Aspect-based Document Similarity for Research Papers" #COLING2020 ⭐62 · Updated last year
- TorchServe + Streamlit for easily serving your HuggingFace NER models ⭐33 · Updated 3 years ago
- ⭐87 · Updated 3 years ago
- A deep learning library based on PyTorch focused on low-resource language research and robustness ⭐70 · Updated 3 years ago
- Code for the paper: Saying No is An Art: Contextualized Fallback Responses for Unanswerable Dialogue Queries ⭐19 · Updated 3 years ago
- Generate BERT vocabularies and pretraining examples from Wikipedias ⭐17 · Updated 5 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ⭐58 · Updated 2 years ago
- Multitask Learning with Pretrained Transformers ⭐40 · Updated 4 years ago
- Code release for "A Time-Aware Transformer Based Model for Suicide Ideation Detection on Social Media", EMNLP 2020. ⭐53 · Updated 4 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. ⭐82 · Updated 3 years ago
- ⭐34 · Updated 4 years ago
- Scalable Attentive Sentence-Pair Modeling via Distilled Sentence Embedding (AAAI 2020) - PyTorch Implementation ⭐32 · Updated last year
- ML Reproducibility Challenge 2020: ELECTRA reimplementation using PyTorch and Transformers ⭐12 · Updated 4 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models into tiny and efficient models for AI at scale ⭐155 · Updated last year
- Introduction to the recently released T5 model from the paper - Exploring the Limits of Transfer Learning with a Unified Text-to-Text Tra… ⭐35 · Updated 5 years ago
- QED: A Framework and Dataset for Explanations in Question Answering ⭐117 · Updated 3 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ⭐93 · Updated 2 years ago