richarddwang / hugdatafast
The elegant integration of huggingface/nlp and fastai2, plus handy transforms using pure huggingface/nlp
☆19 · Updated 5 years ago
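As a quick illustration of that integration, here is a minimal sketch following the usage pattern shown in hugdatafast's README; the `HF_Datasets` class, its `cols` and `hf_toker` parameters, and the import paths are assumptions drawn from the project's documentation (huggingface/nlp has since become the `datasets` library) and may differ across versions:

```python
# A minimal sketch of the hugdatafast workflow, assuming the API shown
# in the project README (HF_Datasets, cols=, hf_toker=); verify these
# names against the installed version.
from datasets import load_dataset            # successor of huggingface/nlp
from transformers import AutoTokenizer
from hugdatafast import HF_Datasets          # assumed import path

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Load a Hugging Face dataset and wrap it so fastai2 can consume it.
cola = load_dataset("glue", "cola")
dsets = HF_Datasets(cola, cols=["sentence", "label"], hf_toker=tokenizer)

# Produce fastai DataLoaders ready to pass to a fastai Learner.
dls = dsets.dataloaders(bs=64)
```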
Alternatives and similar repositories for hugdatafast
Users interested in hugdatafast are comparing it to the libraries listed below.
- A deep learning library based on PyTorch, focused on low-resource language research and robustness ☆70 · Updated 3 years ago
- TorchServe + Streamlit for easily serving your HuggingFace NER models ☆33 · Updated 3 years ago
- A collection of Models, Datasets, DataModules, Callbacks, Metrics, Losses and Loggers to better integrate pytorch-lightning with transformers ☆47 · Updated 2 years ago
- ☆28 · Updated 2 years ago
- A minimal template for creating a PyPI package ☆49 · Updated 4 years ago
- A package for fine-tuning Transformers with TPUs, written in TensorFlow 2.0+ ☆37 · Updated 4 years ago
- A queue service for quickly developing scripts that use all your GPUs efficiently ☆88 · Updated 3 years ago
- SMASHED is a toolkit designed to apply transformations to samples in datasets, such as field extraction, tokenization, prompting, batching, and more ☆35 · Updated last year
- Helper scripts and notes that were used while porting various NLP models ☆48 · Updated 3 years ago
- [WIP] Behold, semantic-search, built over sentence-transformers to make it easy for search engineers to evaluate, optimise and deploy models ☆15 · Updated 2 years ago
- Large Scale BERT Distillation ☆33 · Updated 2 years ago
- ☆87 · Updated 3 years ago
- DEPRECATED: all functionality moved to nbdev ☆15 · Updated 3 years ago
- classy is a simple-to-use library for building high-performance Machine Learning models in NLP. ☆87 · Updated 3 weeks ago
- NLP Examples using the 🤗 libraries ☆40 · Updated 4 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ☆69 · Updated 5 years ago
- ☆30 · Updated 4 years ago
- This repository contains example code to build models on TPUs ☆30 · Updated 2 years ago
- Simple dataset-to-dataloader library for PyTorch ☆33 · Updated 9 months ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale ☆156 · Updated last year
- Knowledge Distillation Toolkit ☆88 · Updated 5 years ago
- KitanaQA: Adversarial training and data augmentation for neural question-answering models ☆56 · Updated 2 years ago
- ☆104 · Updated 4 years ago
- No Teacher BART distillation experiment for NLI tasks ☆28 · Updated 5 years ago
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 3 years ago
- GNES Hub: ship AI/ML models as Docker containers and use Docker containers as plugins. ☆34 · Updated 6 years ago
- Code for scaling Transformers ☆26 · Updated 4 years ago
- A python library for highly configurable transformers, easing model architecture search and experimentation. ☆49 · Updated 3 years ago
- The stand-alone training engine module for the ALOHA.eu project. ☆15 · Updated 6 years ago
- Scripts to convert datasets from various sources to Hugging Face Datasets. ☆57 · Updated 3 years ago