davidkim205 / translation
☆11 · Updated 5 months ago

Related projects:
- 1-Click is all you need. (☆58, updated 4 months ago)
- Korean model evaluation using a self-built Korean evaluation dataset (☆28, updated 3 months ago)
- KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models (☆24, updated 3 weeks ago)
- ☆10, updated last year
- ☆15, updated last month
- ☆14, updated last year
- ☆32, updated 11 months ago
- 🤗 Sample code for training an LM with minimal setup (☆57, updated last year)
- Official repository for KoMT-Bench built by LG AI Research (☆44, updated last month)
- Korean translation of the StrategyQA dataset (☆20, updated 5 months ago)
- ☆32, updated last year
- Korean-OpenOrca: llama2 fine-tuned on the OpenOrca-KO dataset (☆19, updated 10 months ago)
- CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean (☆38, updated last week)
- Train Gemma on TPU/GPU! (Codebase for training the Gemma-Ko series) (☆45, updated 6 months ago)
- BPE-based Korean T5 model for a text-to-text unified framework (☆62, updated 5 months ago)
- Evolve LLM training instructions from English into any language (☆108, updated last year)
- Evaluating language-model responses with a reward model (☆26, updated 6 months ago)
- Chatbot project specialized for the Korean medical domain (☆26, updated 10 months ago)
- ☆100, updated last year
- Korean LLM leaderboard and model performance/safety management (☆22, updated 11 months ago)
- RLHF learning environment for Korean (☆23, updated 11 months ago)
- Korean LLM fine-tuned from KoAlpaca using IA3 (☆67, updated last year)
- [Google Meet] MLLM Arxiv Casual Talk (☆54, updated last year)
- ☆20, updated last year
- ☆19, updated 2 years ago
- [KO-Platy🥮] KO-platypus: llama-2-ko fine-tuned on Korean-Open-Platypus (☆77, updated 10 months ago)
- ☆12, updated last year
- Generate synthetic data for LLM fine-tuning in arbitrary situations in a systematic way (☆21, updated 6 months ago)
- KoTAN: Korean Translation and Augmentation with fine-tuned NLLB (☆24, updated 8 months ago)
- ☆13, updated this week