VAIV-2023 / RLHF-Korean-Friendly-LLM
Developing a Korean LLM: hate speech filtering, improving conversational ability, and fine-tuning with RLHF (see the sketch below)
☆19 · Updated 3 months ago
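As a rough illustration of the RLHF step named in the description above, here is a minimal sketch of a single PPO update, assuming the classic trl `PPOTrainer` API (trl < 0.12); the model id, prompt, and reward value are placeholders, not the repository's actual training setup.

```python
# Minimal RLHF (PPO) sketch, assuming the classic trl PPOTrainer API (trl < 0.12).
# The model id, prompt, and reward below are placeholders, not the repository's own code.
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

model_name = "beomi/llama-2-ko-7b"  # placeholder Korean base model
config = PPOConfig(model_name=model_name, learning_rate=1.41e-5,
                   batch_size=1, mini_batch_size=1)

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # llama tokenizers have no pad token by default
policy = AutoModelForCausalLMWithValueHead.from_pretrained(model_name)
ppo_trainer = PPOTrainer(config, policy, ref_model=None, tokenizer=tokenizer)

query = "RLHF가 무엇인지 한 문장으로 설명해 줘."
query_tensor = tokenizer.encode(query, return_tensors="pt").squeeze(0)

# Sample a response from the current policy.
response_tensor = ppo_trainer.generate(query_tensor, return_prompt=False,
                                       max_new_tokens=64)

# In practice the reward would come from a reward model or hate-speech filter;
# a dummy scalar stands in here just to show the update call.
reward = [torch.tensor(1.0)]

# One PPO optimization step on the (query, response, reward) triple.
stats = ppo_trainer.step([query_tensor], [response_tensor.squeeze(0)], reward)
```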
Alternatives and similar repositories for RLHF-Korean-Friendly-LLM
Users who are interested in RLHF-Korean-Friendly-LLM are comparing it to the libraries listed below.
- KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models ☆25 · Updated last year
- Korean datasets available on huggingface ☆30 · Updated 11 months ago
- ☆60 · Updated this week
- Korean-OpenOrca: llama2 fine-tuned on the OpenOrca-KO dataset ☆19 · Updated last year
- Evaluate gpt-4o on CLIcK (Korean NLP Dataset) ☆20 · Updated last year
- ☆18 · Updated 2 years ago
- The most modern LLM evaluation toolkit ☆70 · Updated 3 weeks ago
- ☆20 · Updated last year
- CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean ☆46 · Updated 8 months ago
- A hackable, simple, and research-friendly GRPO training framework with high-speed weight synchronization in a multi-node environment. ☆30 · Updated 3 weeks ago
- Official repository for KoMT-Bench built by LG AI Research ☆68 · Updated last year
- An RLHF learning environment for Korean users ☆25 · Updated last year
- [ACL 2024 Findings] Official PyTorch Implementation code for realizing the technical part of CoLLaVO: Crayon Large Language and Vision mO… ☆98 · Updated last year
- Evolve LLM training instructions, from English instructions to any language. ☆119 · Updated 2 years ago
- A repository implementing Google's "Chain-of-Thought Reasoning without Prompting" in code. ☆67 · Updated 11 months ago
- BERTScore for text generation (see the BERTScore sketch after this list) ☆12 · Updated 8 months ago
- 1-Click is all you need. ☆62 · Updated last year
- Evaluating language model responses with a reward model (see the scoring sketch after this list) ☆29 · Updated last year
- Difference-based Contrastive Learning for Korean Sentence Embeddings ☆25 · Updated 2 years ago
- BPE-based Korean T5 model for a unified text-to-text framework ☆63 · Updated last year
- [Google Meet] MLLM Arxiv Casual Talk ☆52 · Updated 2 years ago
- MIRAGE is a light benchmark to evaluate RAG performance. ☆15 · Updated 4 months ago
- Performs benchmarking on two Korean datasets with minimal time and effort. ☆43 · Updated last month
- [KO-Platy🥮] KO-platypus model: llama-2-ko fine-tuned on Korean-Open-platypus ☆75 · Updated 3 weeks ago
- A Korean LLM fine-tuned from KoAlpaca using the IA3 method ☆69 · Updated 2 years ago
- Korean LLM leaderboard and model performance/safety management ☆22 · Updated last year
- ☆68 · Updated last year
- Train GEMMA on TPU/GPU! (Codebase for training Gemma-Ko Series) ☆48 · Updated last year
- Official implementation of "OffsetBias: Leveraging Debiased Data for Tuning Evaluators" ☆25 · Updated last year
- Translation of the StrategyQA dataset ☆22 · Updated last year
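For the BERTScore entry above, a minimal usage sketch with the bert-score package; the Korean sentence pair is an illustrative assumption, not data from that repository.

```python
# Minimal BERTScore sketch using the bert-score package; the sentences are
# illustrative placeholders. lang="ko" selects a multilingual backbone.
from bert_score import score

candidates = ["서울은 대한민국의 수도입니다."]
references = ["대한민국의 수도는 서울이다."]

# Returns per-sentence precision, recall, and F1 tensors.
P, R, F1 = score(candidates, references, lang="ko", verbose=False)
print(f"BERTScore F1: {F1.mean().item():.4f}")
```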
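For the reward-model evaluation entry above, a minimal scoring sketch assuming a reward model exposed as a single-logit sequence-classification head; the model id, prompt, and response are placeholders, not that repository's own setup.

```python
# Minimal reward-model scoring sketch; assumes a reward model with a single-logit
# sequence-classification head. The model id, prompt, and response are placeholders.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

reward_model_id = "OpenAssistant/reward-model-deberta-v3-large-v2"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(reward_model_id)
reward_model = AutoModelForSequenceClassification.from_pretrained(reward_model_id)
reward_model.eval()

prompt = "RLHF가 무엇인지 한 문장으로 설명해 줘."
response = "RLHF는 사람의 선호 데이터로 학습한 보상 신호로 언어 모델을 미세 조정하는 방법입니다."

# The (prompt, response) pair is encoded as a sentence pair; the single logit
# is read as a scalar preference score (higher = better response).
inputs = tokenizer(prompt, response, return_tensors="pt", truncation=True)
with torch.no_grad():
    reward = reward_model(**inputs).logits.squeeze().item()
print(f"reward score: {reward:.3f}")
```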