Pipeline for pulling and processing online language model pretraining data from the web
★177 · Jul 31, 2023 · Updated 2 years ago
Alternatives and similar repositories for olm-datasets
Users interested in olm-datasets are comparing it to the libraries listed below.
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data; it should work with any Hugging Face text dataset. ★96 · Feb 9, 2023 · Updated 3 years ago
- A utility for storing and reading files for Korean LM training ★35 · Oct 15, 2025 · Updated 4 months ago
- Long-context pretrained encoder-decoder models ★96 · Oct 28, 2022 · Updated 3 years ago
- Korean Named Entity Corpus ★25 · May 12, 2023 · Updated 2 years ago
- A project for Korean automatic word spacing ★12 · Aug 3, 2020 · Updated 5 years ago
- An example of fine-tuning KoGPT with OSLO. ★23 · Aug 26, 2022 · Updated 3 years ago
- ★11 · Oct 3, 2021 · Updated 4 years ago
- All-in-one text de-duplication ★744 · Updated this week
- ★1,258 · Jul 30, 2024 · Updated last year
- Machine Generated Captions for Best Artworks ★22 · Sep 21, 2022 · Updated 3 years ago
- Fine-tuning examples for NSMC, KorSTS, ... ★18 · Feb 23, 2022 · Updated 4 years ago
- Convert Numerical Representations to Korean Pronunciation ★14 · Apr 20, 2020 · Updated 5 years ago
- Anh - LAION's multilingual assistant datasets and models ★27 · Apr 5, 2023 · Updated 2 years ago
- Adversarial Test Dataset for Korean Multi-turn Response Selection ★34 · Dec 16, 2021 · Updated 4 years ago
- Code used for sourcing and cleaning the BigScience ROOTS corpus ★319 · Mar 20, 2023 · Updated 2 years ago
- Train 🤗 transformers with DeepSpeed: ZeRO-2, ZeRO-3 ★23 · May 20, 2021 · Updated 4 years ago
- ★184 · May 26, 2023 · Updated 2 years ago
- An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library. ★21 · Nov 28, 2022 · Updated 3 years ago
- An open collection of implementation tips, tricks and resources for training large language models ★498 · Mar 8, 2023 · Updated 2 years ago
- ★357 · Mar 17, 2024 · Updated last year
- Code used for the creation of OBELICS, an open, massive and curated collection of interleaved image-text web documents, containing 141M d… ★211 · Aug 28, 2024 · Updated last year
- PyTorch + HuggingFace code for RetoMaton: "Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval" (ICML 2022), including an… ★286 · Oct 20, 2022 · Updated 3 years ago
- OSLO: Open Source framework for Large-scale model Optimization ★309 · Aug 25, 2022 · Updated 3 years ago
- Korean Math Word Problems ★59 · Jan 14, 2022 · Updated 4 years ago
- All-in-one repository for Fine-tuning & Pretraining (Large) Language Models ★15 · Mar 8, 2023 · Updated 2 years ago
- Provides functionality for converting Modu Corpus (모두의 말뭉치) data into a form convenient for analysis. ★11 · Mar 2, 2022 · Updated 4 years ago
- Deploy KoGPT with Triton Inference Server ★14 · Nov 18, 2022 · Updated 3 years ago
- Training HuggingFace models using fastai ★11 · Jul 22, 2021 · Updated 4 years ago
- PyTorch Implementation of EncT5: Fine-tuning T5 Encoder for Non-autoregressive Tasks ★62 · Jan 22, 2022 · Updated 4 years ago
- Yet another Python binding for mecab-ko ★88 · May 16, 2023 · Updated 2 years ago
- T5-base model for Korean ★27 · May 20, 2021 · Updated 4 years ago
- Choseong (Korean initial consonant) interpreter based on ko-BART ★29 · Mar 31, 2021 · Updated 4 years ago
- ★23 · Jul 10, 2023 · Updated 2 years ago
- Efficient few-shot learning with Sentence Transformers ★2,688 · Dec 11, 2025 · Updated 2 months ago
- Calculates the expected training time for an LLM. ★38 · Apr 17, 2023 · Updated 2 years ago
- A curated list of papers and resources for text-to-image evaluation. ★30 · Sep 6, 2023 · Updated 2 years ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ★98 · Apr 26, 2023 · Updated 2 years ago
- ★14 · May 3, 2022 · Updated 3 years ago
- baikal.ai's pre-trained BERT models: descriptions and sample code ★12 · Jun 24, 2021 · Updated 4 years ago