jwj7140 / ko-medical-chat
A chatbot project specialized for the Korean medical domain
☆32 · Updated last year
Alternatives and similar repositories for ko-medical-chat
Users interested in ko-medical-chat are comparing it to the repositories listed below.
- [KO-Platy🥮] KO-Platypus, a model built by fine-tuning llama-2-ko on Korean-Open-platypus ☆75 · Updated last month
- A Korean LLM built by fine-tuning KoAlpaca with the IA3 method ☆69 · Updated 2 years ago
- Forked repo from https://github.com/EleutherAI/lm-evaluation-harness/commit/1f66adc ☆80 · Updated last year
- Open-source Korean language model ☆82 · Updated 2 years ago
- ☆68 · Updated last year
- Benchmark in Korean Context ☆136 · Updated 2 years ago
- BPE-based Korean T5 model for a text-to-text unified framework ☆63 · Updated last year
- Code that blocks an LLM from generating foreign-language tokens ☆80 · Updated 2 months ago
- The most modern LLM evaluation toolkit ☆70 · Updated 2 weeks ago
- Multi-domain reasoning benchmark for Korean language models ☆197 · Updated 11 months ago
- Korean psychological counseling dataset ☆78 · Updated 2 years ago
- Kor-IR: Korean Information Retrieval Benchmark ☆89 · Updated last year
- ☆123 · Updated 2 years ago
- Korean model evaluation using an in-house Korean evaluation dataset ☆31 · Updated last year
- Korean Sentence Embedding Model Performance Benchmark for RAG ☆48 · Updated 8 months ago
- KURE: an embedding model specialized for Korean retrieval, developed at Korea University ☆187 · Updated last month
- ☆108 · Updated 2 months ago
- Korean datasets available on huggingface ☆30 · Updated last year
- Korean Multi-task Instruction Tuning ☆158 · Updated last year
- ☆31 · Updated last year
- ☆107 · Updated 2 years ago
- Korean-OpenOrca, built by fine-tuning llama2 on the OpenOrca-KO dataset ☆19 · Updated last year
- Gugugo: an open-source Korean translation model project ☆81 · Updated last year
- Official repository for KoMT-Bench built by LG AI Research ☆69 · Updated last year
- This repository aims to develop CoT Steering based on CoT without Prompting. It focuses on enhancing the model’s latent reasoning capabil… ☆113 · Updated 3 months ago
- Curation notes on NLP datasets ☆99 · Updated 2 years ago
- Official datasets and PyTorch implementation repository of SQuARe and KoSBi (ACL 2023) ☆245 · Updated 2 years ago
- Repository implementing Google's Chain-of-Thought Reasoning without Prompting in code ☆67 · Updated last year
- Leaderboard for Korean text embedding models ☆92 · Updated 11 months ago
- 42dot LLM consists of a pre-trained language model, 42dot LLM-PLM, and a fine-tuned model, 42dot LLM-SFT, which is trained to respond to … ☆130 · Updated last year