★87 · Jun 2, 2022 · Updated 3 years ago
Alternatives and similar repositories for tune
Users interested in tune are comparing it to the libraries listed below.
- MeCab model trained with OpenKorPos. ★23 · Jun 19, 2022 · Updated 3 years ago
- exBERT on Transformers 🤗 ★10 · Jun 14, 2021 · Updated 4 years ago
- Bias and hate speech classification with KoELECTRA ★27 · Jun 12, 2023 · Updated 2 years ago
- Unofficial implementation of Adaptive Input in PyTorch ★12 · Feb 22, 2019 · Updated 7 years ago
- ★20 · Apr 28, 2021 · Updated 4 years ago
- An implementation of model-parallel autoregressive transformers on GPUs, based on the DeepSpeed library. ★21 · Nov 28, 2022 · Updated 3 years ago
- Prune a model while fine-tuning or training. ★406 · Jun 21, 2022 · Updated 3 years ago
- Anh - LAION's multilingual assistant datasets and models ★27 · Apr 5, 2023 · Updated 2 years ago
- Personal information identification standard ★21 · Jan 24, 2024 · Updated 2 years ago
- Reference PyTorch code for named entity tagging ★87 · Oct 18, 2024 · Updated last year
- **ARCHIVED** Filesystem interface to 🤗 Hub ★59 · Apr 6, 2023 · Updated 2 years ago
- A utility for storing and reading files for Korean LM training ★35 · Oct 15, 2025 · Updated 5 months ago
- KoGPT2 on Hugging Face Transformers ★33 · May 4, 2021 · Updated 4 years ago
- Python Hangul processing library. Python Korean Morphological Analyzer ★19 · Feb 4, 2025 · Updated last year
- Python project template for personal projects! ★11 · Nov 28, 2020 · Updated 5 years ago
- Tokenizer comparison experiments ★11 · Jan 3, 2022 · Updated 4 years ago
- A repository with MeCab bindings for Python 3.5+, using neither SWIG nor pybind. (Not maintained now) ★28 · May 21, 2021 · Updated 4 years ago
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ★791 · Apr 24, 2023 · Updated 2 years ago
- ★11 · Aug 12, 2020 · Updated 5 years ago
- Web archiving utility library ★11 · Mar 11, 2026 · Updated last week
- ★30 · Nov 18, 2022 · Updated 3 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models into tiny, efficient models for AI at scale ★157 · Dec 20, 2023 · Updated 2 years ago
- Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM, and Sentence Transformers with easy-to-use hardware optimization… ★3,332 · Mar 13, 2026 · Updated last week
- Scalable Attentive Sentence-Pair Modeling via Distilled Sentence Embedding (AAAI 2020) - PyTorch implementation ★34 · Jul 25, 2023 · Updated 2 years ago
- This repository contains the corpora and supplementary data, along with instructions for recreating the experiments, for our paper "End-… ★90 · Feb 14, 2020 · Updated 6 years ago
- As good as new. How to successfully recycle English GPT-2 to make models for other languages (ACL Findings 2021) ★48 · Aug 2, 2021 · Updated 4 years ago
- Binary Passage Retriever (BPR) - an efficient passage retriever for open-domain question answering ★175 · Jun 6, 2021 · Updated 4 years ago
- Efficient, scalable, enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models ★1,687 · Oct 23, 2024 · Updated last year
- ★25 · Oct 28, 2020 · Updated 5 years ago
- Implementation of the paper "Sentence Bottleneck Autoencoders from Transformer Language Models" ★17 · Mar 14, 2022 · Updated 4 years ago
- Large-scale distributed model training strategy with Colossal-AI and Lightning AI ★56 · Sep 1, 2023 · Updated 2 years ago
- Large-scale unannotated Korean corpus for unsupervised tasks (e.g. language modeling) ★28 · Aug 11, 2019 · Updated 6 years ago
- Namuwiki dataset segmented at the sentence level. Download it from Releases, or via tfds-korean. ★19 · Jun 16, 2021 · Updated 4 years ago
- FastFormers - highly efficient transformer models for NLU ★709 · Mar 21, 2025 · Updated last year
- Wronging a Right: Generating Better Errors to Improve Grammatical Error Detection ★16 · Jan 2, 2019 · Updated 7 years ago
- Official implementation of the papers "GECToR – Grammatical Error Correction: Tag, Not Rewrite" (BEA-20) and "Text Simplification by Tagg… ★957 · May 21, 2024 · Updated last year
- [HCLT 2022] Korean sentence text similarity dataset using Naver Shopping reviews ★25 · Oct 20, 2022 · Updated 3 years ago
- https://ailabs.enliple.com/ ★105 · Feb 25, 2021 · Updated 5 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ★96 · Feb 9, 2023 · Updated 3 years ago