corca-ai / evaluating-gpt-4o-on-CLIcK
Evaluate gpt-4o on CLIcK (Korean NLP Dataset)
☆20 · Updated last year
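CLIcK is a multiple-choice benchmark, so an evaluation like this repository's typically prompts the model with a question and its answer options, then compares the returned letter to the gold label. Below is a minimal sketch of that loop for a single item using the OpenAI Python SDK; the question, choices, and prompt format are hypothetical placeholders, not the repository's actual code.

```python
# Minimal sketch (not the repo's code) of querying gpt-4o on one
# CLIcK-style multiple-choice item via the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical example item; real items come from the CLIcK dataset.
question = "다음 중 한국의 전통 명절이 아닌 것은?"
choices = ["설날", "추석", "단오", "할로윈"]

# Format the question and lettered options into a single prompt.
prompt = (
    question + "\n"
    + "\n".join(f"{label}. {text}" for label, text in zip("ABCD", choices))
    + "\n답은 A, B, C, D 중 하나의 알파벳으로만 답하세요."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # deterministic decoding for benchmarking
)
predicted = response.choices[0].message.content.strip()
print(predicted)  # compare against the gold label to compute accuracy
```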
Alternatives and similar repositories for evaluating-gpt-4o-on-CLIcK
Users interested in evaluating-gpt-4o-on-CLIcK are comparing it to the libraries listed below.
- Korean datasets available on huggingface ☆30 · Updated last year
- CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean ☆47 · Updated 10 months ago
- KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models ☆25 · Updated last year
- The most modern LLM evaluation toolkit ☆70 · Updated last month
- Translation of the StrategyQA dataset ☆22 · Updated last year
- Korean-OpenOrca: llama2 fine-tuned on the OpenOrca-KO dataset ☆19 · Updated last year
- ☆40 · Updated last year
- Official repository for KoMT-Bench built by LG AI Research ☆69 · Updated last year
- BERT score for text generation ☆12 · Updated 9 months ago
- [Google Meet] MLLM Arxiv Casual Talk ☆52 · Updated 2 years ago
- Evaluating language model responses with a reward model ☆29 · Updated last year
- ☆60 · Updated last month
- [KO-Platy🥮] KO-platypus model: llama-2-ko fine-tuned on Korean-Open-platypus ☆75 · Updated 2 months ago
- BPE-based Korean T5 model for a text-to-text unified framework ☆63 · Updated last year
- Performs benchmarking on two Korean datasets with minimal time and effort ☆43 · Updated 2 weeks ago
- Liner LLM Meetup archive ☆71 · Updated last year
- A GitHub repo that makes the hwpxlib package easy to use from Python ☆35 · Updated 7 months ago
- High-performance vector search engine with no loss of accuracy through GPU and dynamic placement ☆31 · Updated 3 months ago
- A repository implementing Google's Chain-of-Thought Reasoning without Prompting in code ☆67 · Updated last year
- LINER PDF Chat Tutorial with ChatGPT & Pinecone ☆48 · Updated 2 years ago
- Korean LLM leaderboard and model performance/safety management ☆22 · Updated 2 years ago
- Korean psychological counseling dataset ☆78 · Updated 2 years ago
- ☆64 · Updated 3 months ago
- ☆19 · Updated 2 years ago
- ☆36 · Updated 2 years ago
- Naver Boostcamp AI Tech Stage 3: MRC (Machine Reading Comprehension) ☆10 · Updated 4 years ago
- An RLHF learning environment for Koreans ☆25 · Updated 2 years ago
- A Korean LLM fine-tuned from KoAlpaca using the IA3 method ☆69 · Updated 2 years ago
- 42dot LLM consists of a pre-trained language model, 42dot LLM-PLM, and a fine-tuned model, 42dot LLM-SFT, which is trained to respond to … ☆130 · Updated last year
- ☆68 · Updated last year