philschmid / optimum-transformers-optimizations
☆30 · Updated 2 years ago
Alternatives and similar repositories for optimum-transformers-optimizations
Users that are interested in optimum-transformers-optimizations are comparing it to the libraries listed below
- ☆20 · Updated 4 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset ☆93 · Updated 2 years ago
- Ensembling Hugging Face transformers made easy ☆63 · Updated 2 years ago
- exBERT on Transformers🤗 ☆10 · Updated 4 years ago
- Anh - LAION's multilingual assistant datasets and models ☆27 · Updated 2 years ago
- Calculating expected time for training an LLM ☆38 · Updated 2 years ago
- Developing tools to automatically analyze datasets ☆74 · Updated 9 months ago
- Load What You Need: Smaller Multilingual Transformers for PyTorch and TensorFlow 2.0 ☆105 · Updated 3 years ago
- Convenient Text-to-Text Training for Transformers ☆19 · Updated 3 years ago
- Using short models to classify long texts ☆21 · Updated 2 years ago
- 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX ☆82 · Updated 3 years ago
- Train 🤗transformers with DeepSpeed: ZeRO-2, ZeRO-3 ☆23 · Updated 4 years ago
- KeyPhraseTransformer lets you quickly extract key phrases, topics, themes from your text data with T5 transformer | Keyphrase extraction… ☆104 · Updated last year
- Observe the slow deterioration of my mental sanity in the github commit history ☆12 · Updated 2 years ago
- 🛠️ Tools for Transformers compression using PyTorch Lightning ⚡ ☆84 · Updated 8 months ago
- DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization (ACL 2022) ☆50 · Updated 2 years ago
- **ARCHIVED** Filesystem interface to 🤗 Hub ☆58 · Updated 2 years ago
- Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models ☆82 · Updated 10 months ago
- A framework that aims to wisely initialize unseen subword embeddings in PLMs for efficient large-scale continued pretraining ☆18 · Updated last year
- An instruction-based benchmark for text improvements ☆141 · Updated 2 years ago
- Applying "Load What You Need: Smaller Versions of Multilingual BERT" to LaBSE ☆18 · Updated 3 years ago
- A collection of public Korean instruction datasets for training language models ☆19 · Updated 2 years ago
- Our open source implementation of MiniLMv2 (https://aclanthology.org/2021.findings-acl.188) ☆61 · Updated 2 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 3 years ago
- ☆13 · Updated 2 years ago
- Embedding Recycling for Language models ☆39 · Updated 2 years ago
- Plug-and-play Search Interfaces with Pyserini and Hugging Face ☆32 · Updated 2 years ago
- baikal.ai's pre-trained BERT models: descriptions and sample codes ☆12 · Updated 4 years ago
- This repository contains the code for the paper Prompting ELECTRA: Few-Shot Learning with Discriminative Pre-Trained Models ☆48 · Updated 3 years ago
- GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embeddings ☆43 · Updated last year