Marker-Inc-Korea / CoT-llama2
Fine-tuning llama2 using the chain-of-thought (CoT) approach
Related projects
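The repository's exact training recipe is not shown here. As a rough, hypothetical illustration of the chain-of-thought fine-tuning idea, each training example can be formatted so the model is supervised to produce its reasoning steps before the final answer; the function name and prompt template below are assumptions, not the repository's actual code:

```python
def format_cot_example(question: str, rationale: str, answer: str) -> str:
    """Format one training example so the model learns to emit
    its reasoning before the final answer (hypothetical template)."""
    return (
        f"### Question:\n{question}\n\n"
        f"### Reasoning:\n{rationale}\n\n"
        f"### Answer:\n{answer}"
    )

# Example of how a CoT-style supervised sample might look:
example = format_cot_example(
    "If a pen costs 2 dollars, how much do 3 pens cost?",
    "Each pen costs 2 dollars, so 3 pens cost 3 * 2 = 6 dollars.",
    "6 dollars",
)
print(example)
```

Strings formatted this way would then be tokenized and passed to a standard causal-LM fine-tuning loop (e.g. the Hugging Face `Trainer`), so that at inference time the model continues a `### Reasoning:` section before committing to an answer.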
Alternatives and complementary repositories for CoT-llama2
- High-performance vector search engine with no loss of accuracy through GPU and dynamic placement
- Evaluating language model responses with a Reward Model
- Korean-OpenOrca: llama2 fine-tuned on the OpenOrca-KO dataset
- "Learning-based One-line intelligence Owner Network Connectivity Tool"
- Korean translation of the StrategyQA dataset
- BPE-based Korean T5 model for the text-to-text unified framework
- KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models
- Evaluation of gpt-4o on CLIcK (a Korean NLP dataset)
- Korean datasets available on Hugging Face
- LINER PDF Chat Tutorial with ChatGPT & Pinecone
- Difference-based Contrastive Learning for Korean Sentence Embeddings