LG-AI-EXAONE / KoMT-Bench
Official repository for KoMT-Bench built by LG AI Research
☆66 · Updated last year
Alternatives and similar repositories for KoMT-Bench
Users interested in KoMT-Bench are comparing it to the repositories listed below.
- The most modern LLM evaluation toolkit ☆67 · Updated this week
- KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models ☆25 · Updated 11 months ago
- BPE-based Korean T5 model for a text-to-text unified framework ☆63 · Updated last year
- ☆35 · Updated last year
- Korean datasets available on huggingface ☆29 · Updated 9 months ago
- ☆101 · Updated last week
- Performs benchmarking on two Korean datasets with minimal time and effort. ☆40 · Updated 2 months ago
- A repository implementing Google's Chain-of-Thought Reasoning without Prompting in code. ☆66 · Updated 10 months ago
- Train GEMMA on TPU/GPU! (Codebase for training Gemma-Ko Series) ☆48 · Updated last year
- Evaluating language model responses using a reward model ☆29 · Updated last year
- Korean model evaluation using an in-house Korean evaluation dataset ☆31 · Updated last year
- BERT score for text generation ☆12 · Updated 6 months ago
- CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean ☆45 · Updated 7 months ago
- Korean translation of the StrategyQA dataset ☆22 · Updated last year
- Forked repo from https://github.com/EleutherAI/lm-evaluation-harness/commit/1f66adc ☆80 · Updated last year
- Korean T5 model ☆54 · Updated 3 years ago
- [KO-Platy🥮] KO-Platypus model: llama-2-ko fine-tuned using Korean-Open-platypus ☆75 · Updated last year
- ☆20 · Updated last year
- Benchmark in Korean Context ☆135 · Updated last year
- Code implementation that prevents LLMs from generating foreign-language tokens ☆77 · Updated 3 weeks ago
- [Google Meet] MLLM Arxiv Casual Talk ☆52 · Updated 2 years ago
- An RLHF learning environment for Korean ☆23 · Updated last year
- Korean LLM leaderboard and model performance/safety management ☆22 · Updated last year
- AutoRAG example about benchmarking Korean embeddings. ☆38 · Updated 10 months ago
- 🤗 Sample code for training LMs with minimal setup ☆58 · Updated 2 years ago
- ☆32 · Updated last year
- Evaluate gpt-4o on CLIcK (Korean NLP Dataset) ☆20 · Updated last year
- 42dot LLM consists of a pre-trained language model, 42dot LLM-PLM, and a fine-tuned model, 42dot LLM-SFT, which is trained to respond to … ☆130 · Updated last year
- ☆62 · Updated 2 weeks ago
- Data processing system for polyglot ☆91 · Updated last year