daekeun-ml / KoSimCSE-SageMaker
This is a hands-on lab for ML beginners to perform SimCSE step by step. It implements both supervised and unsupervised SimCSE, and distributed training is possible with Amazon SageMaker.
☆23 · Updated last year
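For context, below is a minimal sketch of the unsupervised SimCSE objective this hands-on covers: the same batch is encoded twice, dropout noise makes the two embeddings differ (forming the positive pairs), and an in-batch contrastive loss pulls each sentence toward its second view. The encoder name, sentences, and temperature are placeholders, not taken from the repository's code.

```python
# Minimal unsupervised SimCSE sketch (illustrative only, not the repo's code).
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")  # placeholder Korean encoder
model = AutoModel.from_pretrained("klue/bert-base")
model.train()  # keep dropout active so the two forward passes differ

sentences = ["좋은 아침입니다.", "오늘 날씨가 맑네요."]
batch = tokenizer(sentences, padding=True, return_tensors="pt")

# Two forward passes over the same batch; dropout yields two distinct "views".
z1 = model(**batch).last_hidden_state[:, 0]  # [CLS] embeddings, view 1
z2 = model(**batch).last_hidden_state[:, 0]  # [CLS] embeddings, view 2

# In-batch contrastive (InfoNCE) loss with an assumed temperature of 0.05.
sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / 0.05
labels = torch.arange(sim.size(0))  # diagonal entries are the positive pairs
loss = F.cross_entropy(sim, labels)
loss.backward()
```

The supervised variant follows the same in-batch contrastive setup but uses annotated positive (and hard-negative) sentence pairs instead of dropout-based views.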
Alternatives and similar repositories for KoSimCSE-SageMaker:
Users that are interested in KoSimCSE-SageMaker are comparing it to the libraries listed below
- A collection of Korean NLP hands-on labs on Amazon SageMaker ☆17 · Updated last year
- This hands-on lab shows you how to easily train and deploy Korean NLP models in a cloud-native environment using SageMaker's Hugging … ☆20 · Updated 2 years ago
- This hands-on lab walks you through a step-by-step approach to efficiently serving and fine-tuning large-scale Korean models on AWS infra… ☆25 · Updated 11 months ago
- Evaluation of Korean models using an in-house Korean evaluation dataset ☆31 · Updated 7 months ago
- This is a workshop designed for Amazon Bedrock, a foundation model service. ☆29 · Updated last year
- A voice bot based on an LLM. ☆10 · Updated last month
- Kor-IR: Korean Information Retrieval Benchmark ☆79 · Updated 6 months ago
- AutoRAG example for benchmarking Korean embeddings. ☆26 · Updated 3 months ago
- BPE-based Korean T5 model for a unified text-to-text framework ☆63 · Updated 9 months ago
- Makes running benchmarks simple yet maintainable, again. Currently supports only Korean-based cross-encoders. ☆11 · Updated 2 weeks ago
- [KO-Platy🥮] KO-Platypus, a model created by fine-tuning llama-2-ko on Korean-Open-platypus ☆77 · Updated last year
- Leaderboard for Korean text embedding models ☆53 · Updated 2 months ago
- Liner LLM Meetup archive ☆72 · Updated 9 months ago
- ☆41 · Updated last year
- Korean Sentence Embedding Model Performance Benchmark for RAG ☆46 · Updated 8 months ago
- MLOps and LLMOps with AWS SageMaker ☆33 · Updated last year
- ☆69 · Updated last month
- Korean datasets available on Hugging Face ☆23 · Updated 3 months ago
- Official repository for KoMT-Bench built by LG AI Research ☆51 · Updated 5 months ago
- Evaluating language model responses using a reward model ☆27 · Updated 10 months ago
- ☆63 · Updated 9 months ago
- Translation of the StrategyQA dataset ☆22 · Updated 9 months ago
- A repository implementing Google's Chain-of-Thought Reasoning without Prompting in code. ☆62 · Updated 3 months ago
- Korean-OpenOrca: llama2 fine-tuned on the OpenOrca-KO dataset ☆19 · Updated last year
- NC NLP Techblog. Introducing the challenges and changes that NC's NLP will open up. ☆21 · Updated last week
- Shows how to deploy and use an agent with an LLM. ☆12 · Updated 2 months ago
- ☆12 · Updated 6 months ago
- A Korean LLM created by fine-tuning KoAlpaca with the IA3 method ☆68 · Updated last year
- Covers NLP from its history to serving in a single book. ☆17 · Updated last month