metterian / korean_bert_score
BERT score for text generation
☆12 · Updated 11 months ago
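For context, BERTScore compares candidate and reference sentences via contextual embedding similarity and is typically reported as an F1 value. Below is a minimal sketch using the upstream `bert-score` package; the exact interface of korean_bert_score is not shown on this page, so the multilingual model selected via `lang="ko"` and the example sentences are assumptions for illustration only.

```python
# Minimal sketch: scoring Korean candidate/reference pairs with the upstream
# `bert-score` package (pip install bert-score). korean_bert_score may expose
# a different interface; the lang="ko" multilingual default here is an assumption.
from bert_score import score

candidates = ["오늘 날씨가 정말 좋네요."]    # system outputs
references = ["오늘은 날씨가 참 좋습니다."]  # gold references

# Returns precision, recall, and F1 as torch tensors, one entry per pair.
P, R, F1 = score(candidates, references, lang="ko", verbose=True)
print(f"BERTScore F1: {F1.mean().item():.4f}")
```

The mean F1 over all pairs is the number most commonly reported when using BERTScore as a generation metric.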
Alternatives and similar repositories for korean_bert_score
Users interested in korean_bert_score are comparing it to the libraries listed below.
- KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models ☆25 · Updated last year
- Translation of the StrategyQA dataset ☆23 · Updated last year
- Official repository for KoMT-Bench built by LG AI Research ☆71 · Updated last year
- Evaluation of Korean language models using an in-house Korean evaluation dataset ☆31 · Updated last year
- ☆20 · Updated last year
- Korean datasets available on Hugging Face ☆34 · Updated last year
- ☆36 · Updated 2 years ago
- The most modern LLM evaluation toolkit ☆70 · Updated last month
- Evaluating language-model responses with a reward model ☆29 · Updated last year
- AutoRAG example about benchmarking Korean embeddings. ☆42 · Updated last year
- Performs benchmarking on two Korean datasets with minimal time and effort. ☆44 · Updated this week
- nanoRLHF: from-scratch journey into how LLMs and RLHF really work. ☆38 · Updated last week
- CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean ☆47 · Updated last year
- A code implementation of Google's "Chain-of-Thought Reasoning without Prompting" ☆66 · Updated last year
- Evaluate gpt-4o on CLIcK (Korean NLP Dataset) ☆20 · Updated last year
- ☆12 · Updated last year
- A hands-on guide for ML beginners to run SimCSE step by step. Implements both supervised and unsupervised SimCSE, and dist… ☆22 · Updated 2 years ago
- Forked repo from https://github.com/EleutherAI/lm-evaluation-harness/commit/1f66adc ☆81 · Updated last year
- ☆19 · Updated 2 years ago
- ☆113 · Updated 5 months ago
- Makes running benchmarks simple yet maintainable, again. Currently supports only Korean-based cross-encoders. ☆26 · Updated 3 weeks ago
- Train GEMMA on TPU/GPU! (Codebase for training Gemma-Ko Series) ☆48 · Updated last year
- 1-Click is all you need. ☆63 · Updated last year
- High-performance vector search engine with no loss of accuracy, achieved through GPU acceleration and dynamic placement ☆31 · Updated 5 months ago
- Code implementation that blocks LLMs from generating foreign-language tokens ☆82 · Updated 4 months ago
- BPE-based Korean T5 model for a text-to-text unified framework ☆63 · Updated last year
- Korean LLM leaderboard and model performance/safety management ☆22 · Updated 2 years ago
- Korean-OpenOrca: llama2 fine-tuned on the OpenOrca-KO dataset ☆19 · Updated 2 years ago
- Naver Boostcamp AI Tech Stage 3: MRC (Machine Reading Comprehension) ☆10 · Updated 4 years ago
- [KO-Platy🥮] KO-Platypus model: llama-2-ko fine-tuned on Korean-Open-platypus ☆75 · Updated 4 months ago