Marker-Inc-Korea / CoT-llama2
Fine-tuning llama2 using a chain-of-thought approach
☆10 · Updated last year
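Below is a minimal sketch, not this repository's actual training code, of how chain-of-thought fine-tuning data for a llama2-style model is commonly formatted: each example places the reasoning chain before the final answer in the target text, so supervised fine-tuning teaches the model to reason before responding. The prompt template and helper name `build_cot_example` are illustrative assumptions.

```python
# A minimal sketch (assumed formatting, not the repository's actual code) of
# preparing chain-of-thought fine-tuning examples for a llama2-style model.
# The reasoning chain appears before the final answer in each training target.

def build_cot_example(instruction: str, rationale: str, answer: str) -> str:
    """Format one instruction/rationale/answer triple as a single training text."""
    # Illustrative instruction-style template; the actual repo may use a different one.
    return (
        f"### Instruction:\n{instruction}\n\n"
        f"### Response:\n{rationale}\nTherefore, the answer is {answer}."
    )

if __name__ == "__main__":
    example = build_cot_example(
        instruction="Chulsoo has 3 apples and buys 2 more. How many apples does he have?",
        rationale="He starts with 3 apples and adds the 2 he bought: 3 + 2 = 5.",
        answer="5",
    )
    print(example)
    # The formatted texts would then be tokenized with the base llama2 tokenizer
    # and fed to a standard causal-LM fine-tuning loop (e.g. Hugging Face Trainer).
```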
Alternatives and similar repositories for CoT-llama2
Users interested in CoT-llama2 are comparing it to the libraries listed below
- Korean-OpenOrca: llama2 fine-tuned using the OpenOrca-KO dataset ☆19 · Updated last year
- CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean ☆45 · Updated 7 months ago
- High-performance vector search engine that uses GPU acceleration and dynamic placement with no loss of accuracy ☆29 · Updated 3 weeks ago
- A problem solver based on an agentic workflow ☆16 · Updated 5 months ago
- KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models ☆25 · Updated 11 months ago
- Evaluating language model responses using a reward model ☆29 · Updated last year
- Translation of the StrategyQA dataset ☆22 · Updated last year
- ☆12 · Updated 2 years ago
- Performs benchmarking on two Korean datasets with minimal time and effort ☆40 · Updated 2 months ago
- Evaluate gpt-4o on CLIcK (Korean NLP Dataset) ☆20 · Updated last year
- A code implementation of Google's Chain-of-Thought Reasoning without Prompting ☆66 · Updated 10 months ago
- A GitHub repo that makes the hwpxlib package easy to use from Python ☆34 · Updated 4 months ago
- A Korean embedding model specialized for the financial domain ☆20 · Updated last year
- "Learning-based One-line intelligence Owner Network Connectivity Tool"☆16Updated 2 years ago
- LINER PDF Chat Tutorial with ChatGPT & Pinecone☆47Updated 2 years ago
- bpe based korean t5 model for text-to-text unified framework☆63Updated last year
- ☆32Updated last year
- Korean datasets available on huggingface ☆29 · Updated 9 months ago
- Difference-based Contrastive Learning for Korean Sentence Embeddings ☆25 · Updated 2 years ago
- ☆35 · Updated last year
- An RLHF learning environment for Koreans ☆23 · Updated last year
- [Google Meet] MLLM Arxiv Casual Talk ☆52 · Updated 2 years ago
- ☆20 · Updated last year
- Open Source + Multilingual MLLM + Fine-tuning + Distillation + More efficient models and learning + ? ☆18 · Updated 6 months ago
- ☆20 · Updated last year
- AutoRAG example for benchmarking Korean embeddings ☆38 · Updated 10 months ago
- ☆101 · Updated last week
- Train GEMMA on TPU/GPU! (Codebase for training Gemma-Ko Series) ☆48 · Updated last year
- Official repository for KoMT-Bench built by LG AI Research ☆66 · Updated last year
- Korean LLM leaderboard and model performance/safety management ☆22 · Updated last year