davidkim205 / translation
☆12 · Updated last year
Alternatives and similar repositories for translation
Users interested in translation are comparing it to the repositories listed below.
- 1-Click is all you need. ☆62 · Updated last year
- ☆20 · Updated last year
- Evaluation of Korean models using a self-built Korean evaluation dataset ☆31 · Updated last year
- Official repository for KoMT-Bench built by LG AI Research ☆66 · Updated last year
- KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models ☆25 · Updated last year
- BERT score for text generation ☆12 · Updated 7 months ago
- ☆35 · Updated last year
- ☆15 · Updated 2 years ago
- Train GEMMA on TPU/GPU! (Codebase for training the Gemma-Ko series) ☆48 · Updated last year
- Make running benchmarks simple yet maintainable, again. Currently supports only Korean-based cross-encoders. ☆20 · Updated 2 months ago
- Korean-OpenOrca: llama2 fine-tuned on the OpenOrca-KO dataset ☆19 · Updated last year
- Evolve LLM training instructions from English into any language. ☆119 · Updated last year
- [KO-Platy🥮] KO-platypus: llama-2-ko fine-tuned on the Korean-Open-platypus dataset ☆75 · Updated last week
- BPE-based Korean T5 model for a unified text-to-text framework ☆63 · Updated last year
- Korean datasets available on Hugging Face ☆29 · Updated 10 months ago
- A lightweight adjustment tool for smoothing token probabilities in the Qwen models to encourage balanced multilingual generation. ☆80 · Updated last month
- Korean translation of the StrategyQA dataset ☆22 · Updated last year
- Code implementation that blocks an LLM from generating foreign-language tokens ☆79 · Updated 3 weeks ago
- CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean ☆45 · Updated 8 months ago
- This repository aims to develop CoT Steering based on CoT without Prompting. It focuses on enhancing the model’s latent reasoning capabil… ☆112 · Updated 2 months ago
- ☆12 · Updated 8 months ago
- ☆10 · Updated 2 years ago
- KoTAN: Korean Translation and Augmentation with fine-tuned NLLB ☆23 · Updated last year
- ☆107 · Updated 2 years ago
- A chatbot project specialized for the Korean medical domain ☆32 · Updated last year
- The most modern LLM evaluation toolkit ☆70 · Updated last week
- ☆62 · Updated last month
- A hackable, simple, and research-friendly GRPO training framework with high-speed weight synchronization in a multi-node environment. ☆19 · Updated this week
- A Korean LLM fine-tuned from KoAlpaca using the IA3 method ☆69 · Updated 2 years ago
- Evaluating language-model responses with a Reward Model ☆29 · Updated last year