trailerAI / KoTAN
KoTAN: Korean Translation and Augmentation with fine-tuned NLLB
☆23 · Updated last year
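KoTAN builds Korean↔English translation (and augmentation via back-translation) on top of a fine-tuned NLLB checkpoint. As a rough, hedged sketch of the underlying approach only (not KoTAN's own API or weights), the snippet below translates Korean to English with the public base NLLB-200 model through Hugging Face `transformers`; the checkpoint name and language codes are standard NLLB values, and a KoTAN fine-tuned checkpoint would slot into the same pattern.

```python
# Sketch: Korean -> English translation with an NLLB-200 checkpoint via transformers.
# Assumption: the base facebook/nllb-200-distilled-600M model is used here as a
# stand-in; KoTAN ships its own fine-tuned NLLB weights and a higher-level wrapper.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="kor_Hang")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs = tokenizer("한국어 문장을 영어로 번역합니다.", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    # Force the decoder to start in the target language (English, Latin script).
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("eng_Latn"),
    max_length=64,
)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```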
Alternatives and similar repositories for KoTAN
Users interested in KoTAN are comparing it to the libraries listed below.
- Gugugo: an open-source Korean translation model project ☆81 · Updated last year
- [KO-Platy🥮] KO-Platypus: llama-2-ko fine-tuned on Korean-Open-Platypus ☆75 · Updated last year
- BPE-based Korean T5 model for a text-to-text unified framework ☆63 · Updated last year
- Open-source Korean language model ☆82 · Updated 2 years ago
- ☆107 · Updated 2 years ago
- Korean LLM: KoAlpaca fine-tuned with IA3 ☆69 · Updated last year
- Gunmo-emo-classification: how to build a Korean multi-class emotion classification model ☆29 · Updated last year
- ☆123 · Updated 2 years ago
- Korean translation of the StrategyQA dataset ☆22 · Updated last year
- Evaluation dataset for Korean ambiguity resolution ☆50 · Updated last year
- Korean psychological counseling dataset ☆79 · Updated 2 years ago
- Official repository for KoMT-Bench built by LG AI Research ☆66 · Updated last year
- 🤗 Sample code for training an LM with minimal setup ☆58 · Updated 2 years ago
- Korean medical-domain chatbot project ☆32 · Updated last year
- Train GEMMA on TPU/GPU! (Codebase for training the Gemma-Ko series) ☆48 · Updated last year
- Deep-learning language model lab built on the Kiwi morphological analyzer ☆52 · Updated 2 years ago
- Repo forked from https://github.com/EleutherAI/lm-evaluation-harness/commit/1f66adc ☆80 · Updated last year
- ☆35 · Updated last year
- ☆68 · Updated last year
- ☆31 · Updated last year
- Yet another Python binding for mecab-ko ☆86 · Updated 2 years ago
- Multi-domain reasoning benchmark for Korean language models ☆194 · Updated 9 months ago
- 42dot LLM consists of a pre-trained language model, 42dot LLM-PLM, and a fine-tuned model, 42dot LLM-SFT, which is trained to respond to … ☆130 · Updated last year
- ☆147 · Updated 3 years ago
- AutoRAG example for benchmarking Korean embeddings ☆38 · Updated 10 months ago
- Evaluating language model responses with a reward model ☆29 · Updated last year
- Korean T5 model ☆54 · Updated 3 years ago
- Wikitext-format dataset of Namuwiki (one of the most popular Korean wikis) ☆51 · Updated 4 years ago
- Korean SBERT pre-trained models (KR-SBERT) for PyTorch ☆100 · Updated 3 years ago
- ☆19 · Updated 4 years ago