Marker-Inc-Korea / AutoRAG-example-korean-embedding-benchmark
AutoRAG example about benchmarking Korean embeddings.
☆38 · Updated 10 months ago
Alternatives and similar repositories for AutoRAG-example-korean-embedding-benchmark
Users interested in AutoRAG-example-korean-embedding-benchmark are comparing it to the libraries listed below.
- Code that blocks an LLM from generating foreign-language tokens ☆77 · Updated 3 weeks ago
- Kor-IR: Korean Information Retrieval Benchmark ☆87 · Updated last year
- ☆101 · Updated last week
- Korean Sentence Embedding Model Performance Benchmark for RAG ☆48 · Updated 6 months ago
- ☆68 · Updated last year
- A code implementation of Google's Chain-of-Thought Reasoning without Prompting ☆66 · Updated 10 months ago
- Forked repo from https://github.com/EleutherAI/lm-evaluation-harness/commit/1f66adc ☆80 · Updated last year
- The most modern LLM evaluation toolkit ☆67 · Updated last week
- [KO-Platy🥮] KO-Platypus: llama-2-ko fine-tuned on Korean-Open-platypus ☆75 · Updated last year
- Official repository for KoMT-Bench, built by LG AI Research ☆66 · Updated last year
- This repository aims to develop CoT Steering based on CoT without Prompting. It focuses on enhancing the model’s latent reasoning capabil… ☆112 · Updated last month
- Liner LLM Meetup archive ☆71 · Updated last year
- Performs benchmarking on two Korean datasets with minimal time and effort. ☆40 · Updated 2 months ago
- A multi-domain reasoning benchmark for Korean language models ☆194 · Updated 9 months ago
- A GitHub repo that makes the hwpxlib package easy to use from Python ☆34 · Updated 4 months ago
- Korean model evaluation using a self-built Korean evaluation dataset ☆31 · Updated last year
- A GitHub repo that makes the hwplib package easy to use from Python ☆50 · Updated 4 months ago
- KURE: an embedding model specialized for Korean retrieval, developed at Korea University ☆174 · Updated 2 weeks ago
- Evaluating language model responses with a reward model ☆29 · Updated last year
- Korean datasets on Hugging Face ☆29 · Updated 9 months ago
- ☆35 · Updated last year
- Benchmark in Korean Context ☆135 · Updated last year
- SKT A.X LLM 4.0 ☆127 · Updated 3 weeks ago
- ☆62 · Updated 2 weeks ago
- Makes running benchmarks simple yet maintainable, again. Currently supports only Korean-based cross-encoders. ☆20 · Updated last month
- ☆11 · Updated 7 months ago
- BPE-based Korean T5 model for a text-to-text unified framework ☆63 · Updated last year
- Korean-OpenOrca: llama2 fine-tuned on the OpenOrca-KO dataset ☆19 · Updated last year
- BERT score for text generation ☆12 · Updated 6 months ago
- A Korean LLM: KoAlpaca fine-tuned using the IA3 method ☆69 · Updated last year