Marker-Inc-Korea / KO-Platypus
[KO-Platy🥮] KO-Platypus: a model fine-tuned from llama-2-ko using the Korean-Open-platypus dataset
☆75 · Updated 4 months ago
Alternatives and similar repositories for KO-Platypus
Users interested in KO-Platypus are comparing it to the libraries listed below.
- Benchmark in Korean Context ☆136 · Updated 2 years ago
- A Korean LLM fine-tuned from KoAlpaca using the IA3 method ☆69 · Updated 2 years ago
- ☆69 · Updated last year
- Forked repo from https://github.com/EleutherAI/lm-evaluation-harness/commit/1f66adc ☆81 · Updated last year
- A multi-domain reasoning benchmark for Korean language models ☆199 · Updated last year
- Korean Multi-task Instruction Tuning ☆157 · Updated 2 years ago
- ☆123 · Updated 2 years ago
- A code implementation that blocks foreign-language token generation in LLM models ☆82 · Updated 4 months ago
- The most modern LLM evaluation toolkit ☆70 · Updated last month
- Liner LLM Meetup archive ☆71 · Updated last year
- Korean psychological counseling dataset ☆81 · Updated 2 years ago
- ☆113 · Updated 5 months ago
- A chatbot project specialized for the Korean medical domain ☆32 · Updated 2 years ago
- An open-source Korean language model ☆82 · Updated 2 years ago
- Korean Sentence Embedding Model Performance Benchmark for RAG ☆49 · Updated 11 months ago
- ☆107 · Updated 2 years ago
- ☆40 · Updated 2 years ago
- ☆31 · Updated 2 years ago
- Official datasets and PyTorch implementation repository of SQuARe and KoSBi (ACL 2023) ☆248 · Updated 2 years ago
- KoRean based SBERT pre-trained models (KR-SBERT) for PyTorch ☆102 · Updated 3 years ago
- A BPE-based Korean T5 model for a unified text-to-text framework ☆63 · Updated last year
- KURE: an embedding model specialized for Korean retrieval, developed by Korea University ☆198 · Updated 3 months ago
- SKT A.X LLM 4.0 ☆148 · Updated 5 months ago
- A repository implementing Google's "Chain-of-Thought Reasoning without Prompting" in code ☆66 · Updated last year
- This repository aims to develop CoT Steering based on CoT without Prompting. It focuses on enhancing the model's latent reasoning capabil… ☆114 · Updated 6 months ago
- Kor-IR: Korean Information Retrieval Benchmark ☆87 · Updated last year
- 42dot LLM consists of a pre-trained language model, 42dot LLM-PLM, and a fine-tuned model, 42dot LLM-SFT, which is trained to respond to … ☆130 · Updated last year
- Curation note of NLP datasets ☆99 · Updated 3 years ago
- Train GEMMA on TPU/GPU! (Codebase for training the Gemma-Ko series) ☆48 · Updated last year
- 🤗 Sample code for training an LM with minimal setup ☆58 · Updated 2 years ago