EleutherAI / dps
Data processing system for polyglot
☆91 · Updated last year
Alternatives and similar repositories for dps
Users that are interested in dps are comparing it to the libraries listed below
- 🤗 Sample code for training a language model with a minimal setup ☆58 · Updated 2 years ago
- ☆32 · Updated last year
- Easy Language Model Pretraining leveraging Huggingface's Transformers and Datasets ☆129 · Updated 2 years ago
- BPE-based Korean T5 model for a text-to-text unified framework ☆63 · Updated last year
- Korean LLM leaderboard and model performance/safety management ☆22 · Updated last year
- Korean Math Word Problems ☆59 · Updated 3 years ago
- [Google Meet] MLLM Arxiv Casual Talk ☆52 · Updated 2 years ago
- Official repository for KoMT-Bench built by LG AI Research ☆64 · Updated 11 months ago
- ☆35 · Updated last year
- Korean T5 model ☆54 · Updated 3 years ago
- KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models ☆25 · Updated 10 months ago
- CareCall for Seniors: Role Specified Open-Domain Dialogue dataset generated by leveraging LLMs (NAACL 2022) ☆60 · Updated 3 years ago
- Korean translation of the StrategyQA dataset ☆22 · Updated last year
- Adversarial Test Dataset for Korean Multi-turn Response Selection ☆35 · Updated 3 years ago
- Example of fine-tuning kogpt with oslo ☆23 · Updated 2 years ago
- CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean ☆45 · Updated 6 months ago
- ☆19 · Updated 2 years ago
- A project for training Korean language models (Flax, PyTorch with Hugging Face Accelerate) ☆32 · Updated last year
- Character-level Korean ELECTRA model (syllable-level Korean ELECTRA) ☆54 · Updated 2 years ago
- ☆28 · Updated 2 years ago
- KOLD: Korean Offensive Language Dataset ☆81 · Updated 2 years ago
- A utility for storing and reading files for Korean LM training 💾 ☆36 · Updated last year
- ☆106 · Updated 2 years ago
- Korean-English Bilingual Electra Models ☆110 · Updated 3 years ago
- Evaluation of Korean models on a self-built Korean evaluation dataset ☆31 · Updated last year
- ☆36 · Updated last year
- Performs benchmarking on two Korean datasets with minimal time and effort ☆40 · Updated last month
- The code and models for "An Empirical Study of Tokenization Strategies for Various Korean NLP Tasks" (AACL-IJCNLP 2020) ☆119 · Updated 4 years ago
- T5-base model for Korean ☆27 · Updated 4 years ago
- For a Korean RLHF training environment ☆23 · Updated last year