huggingface / tune
☆87 · Updated 2 years ago
Alternatives and similar repositories for tune:
Users who are interested in tune are comparing it to the libraries listed below.
- 🛠️ Tools for Transformers compression using PyTorch Lightning ⚡ ☆81 · Updated 3 months ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- ☆74 · Updated 3 years ago
- Shared code for training sentence embeddings with Flax / JAX ☆27 · Updated 3 years ago
- Execute arbitrary SQL queries on 🤗 Datasets ☆32 · Updated last year
- Accelerated NLP pipelines for fast inference on CPU. Built with Transformers and ONNX Runtime. ☆126 · Updated 4 years ago
- Viewer for the 🤗 datasets library. ☆84 · Updated 3 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models into tiny, efficient models for AI at scale ☆154 · Updated last year
- Helper scripts and notes that were used while porting various NLP models ☆45 · Updated 2 years ago
- An asynchronous concurrent pipeline for classifying Common Crawl, based on fastText's pipeline. ☆86 · Updated 3 years ago
- State-of-the-art semantic sentence embeddings ☆98 · Updated 2 years ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines ☆134 · Updated last year
- Official PyTorch implementation of Length-Adaptive Transformer (ACL 2021) ☆101 · Updated 4 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith, and Mike Lewis. ☆145 · Updated 3 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPU v3-8 with GCP ☆58 · Updated 2 years ago
- LM pretraining with PyTorch/TPU ☆134 · Updated 5 years ago
- Techniques used to run BLOOM at inference in parallel ☆37 · Updated 2 years ago
- Examples for aligning, padding, and batching sequence-labeling data (NER) for use with pre-trained transformer models ☆65 · Updated 2 years ago
- A library to synthesize text datasets using large language models (LLMs) ☆151 · Updated 2 years ago
- Load What You Need: Smaller Multilingual Transformers for PyTorch and TensorFlow 2.0 ☆102 · Updated 2 years ago
- A diff tool for language models ☆42 · Updated last year
- ☆47 · Updated 4 years ago
- ☆19 · Updated 2 years ago
- ☆21 · Updated 3 years ago
- An extensible framework for building visualization and annotation tools to enable better interaction with NLP and Artificial Intelligence… ☆49 · Updated 2 years ago
- A collection of resources on using BERT (https://arxiv.org/abs/1810.04805) and related language models in production environments. ☆96 · Updated 3 years ago
- Question–answer pairs collected from Google ☆126 · Updated 3 years ago
- No-teacher BART distillation experiment for NLI tasks ☆27 · Updated 4 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ☆69 · Updated 4 years ago
- BERT and RoBERTa fine-tuning on the SQuAD dataset using pytorch-lightning ⚡️, 🤗-transformers & 🤗-nlp ☆36 · Updated last year