huggingface / tune
☆87 · Updated 2 years ago
Alternatives and similar repositories for tune
Users interested in tune are comparing it to the libraries listed below.
- 🛠️ Tools for Transformers compression using PyTorch Lightning ⚡ · ☆83 · Updated 6 months ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. · ☆93 · Updated 2 years ago
- Execute arbitrary SQL queries on 🤗 Datasets (see the first sketch after this list) · ☆32 · Updated last year
- ☆76 · Updated 3 years ago
- LM Pretraining with PyTorch/TPU · ☆134 · Updated 5 years ago
- Shared code for training sentence embeddings with Flax / JAX · ☆27 · Updated 3 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale · ☆154 · Updated last year
- Accelerated NLP pipelines for fast inference on CPU, built with Transformers and ONNX Runtime. · ☆126 · Updated 4 years ago
- Viewer for the 🤗 datasets library. · ☆84 · Updated 3 years ago
- Helper scripts and notes that were used while porting various NLP models · ☆46 · Updated 3 years ago
- Our open source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188) · ☆61 · Updated last year
- This repository contains example code to build models on TPUs · ☆30 · Updated 2 years ago
- State of the art Semantic Sentence Embeddings · ☆99 · Updated 2 years ago
- ☆47 · Updated 4 years ago
- A diff tool for language models · ☆42 · Updated last year
- Load What You Need: Smaller Multilingual Transformers for Pytorch and TensorFlow 2.0. · ☆102 · Updated 2 years ago
- ☆28 · Updated 2 years ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines · ☆136 · Updated last year
- An asynchronous concurrent pipeline for classifying Common Crawl based on fastText's pipeline. · ☆86 · Updated 4 years ago
- ☆19 · Updated 2 years ago
- Language-agnostic BERT Sentence Embedding (LaBSE) PyTorch Model (see the second sketch after this list) · ☆21 · Updated 4 years ago
- PyTorch-IE: State-of-the-art Information Extraction in PyTorch · ☆77 · Updated this week
- ☆21 · Updated 3 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. · ☆147 · Updated 3 years ago
- A lightweight but powerful library to build token indices for NLP tasks, compatible with major Deep Learning frameworks like PyTorch and … · ☆51 · Updated 5 months ago
- ☆78 · Updated last year
- An extensible framework for building visualization and annotation tools to enable better interaction with NLP and Artificial Intelligence… · ☆50 · Updated 2 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP · ☆58 · Updated 2 years ago
- Techniques used to run BLOOM at inference in parallel · ☆37 · Updated 2 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention · ☆69 · Updated 4 years ago
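The SQL-on-🤗-Datasets entry above describes a pattern that can also be reproduced without the listed repo: export the dataset to pandas and query the DataFrame with DuckDB. The sketch below is illustrative only; the dataset name ("imdb") and the query are arbitrary assumptions, and the listed project's own API may differ.

```python
# Minimal sketch, not the listed repo's API: run SQL over a 🤗 dataset
# by exporting it to pandas and querying the DataFrame with DuckDB.
import duckdb
from datasets import load_dataset

ds = load_dataset("imdb", split="train")   # any text dataset should work
df = ds.to_pandas()                        # materialize as a pandas DataFrame

# DuckDB resolves in-scope DataFrames by variable name ("df" here).
counts = duckdb.sql("SELECT label, COUNT(*) AS n FROM df GROUP BY label").df()
print(counts)
```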
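Similarly, the LaBSE entry can be exercised through the sentence-transformers package rather than the listed PyTorch port; the checkpoint name below is the one published on the Hugging Face Hub, and the example sentences are arbitrary.

```python
# Minimal sketch, assuming the sentence-transformers package and the public
# "sentence-transformers/LaBSE" checkpoint, not the listed port itself.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/LaBSE")
sentences = ["Hello, world!", "Hallo, Welt!", "Bonjour le monde !"]
embeddings = model.encode(sentences, normalize_embeddings=True)

# With L2-normalized embeddings, cosine similarity is a plain dot product,
# so this prints a cross-lingual similarity matrix.
print(embeddings @ embeddings.T)
```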