daekeun-ml / azure-llm-fine-tuning
This hands-on lab walks you through fine-tuning an open-source LLM on Azure and serving the fine-tuned model there. It is intended for data scientists and ML engineers who have experience with fine-tuning but are unfamiliar with Azure ML.
☆12 · Updated 7 months ago
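The lab's central workflow is submitting a fine-tuning script to an Azure ML compute cluster as a job. Below is a minimal sketch of that pattern using the Azure ML Python SDK v2 (`azure-ai-ml`); the workspace values, compute name, environment reference, and `train.py` script are placeholders for illustration, not the repository's actual configuration.

```python
# Minimal sketch (assumed setup, not the repo's exact code): submit a
# fine-tuning script as an Azure ML command job with the SDK v2.
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

# Connect to an existing Azure ML workspace (all values are placeholders).
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Wrap the training script as a command job on a GPU compute cluster.
job = command(
    code="./src",  # folder containing a hypothetical train.py
    command="python train.py --model_name_or_path ${{inputs.base_model}} --epochs 1",
    inputs={"base_model": "microsoft/Phi-3-mini-4k-instruct"},
    environment="azureml:<your-pytorch-environment>@latest",  # placeholder: curated or custom env with PyTorch/PEFT
    compute="gpu-cluster",  # placeholder compute cluster name
    display_name="llm-finetune-sketch",
)

returned_job = ml_client.jobs.create_or_update(job)
print(returned_job.studio_url)  # link to monitor the run in Azure ML studio
```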
Alternatives and similar repositories for azure-llm-fine-tuning:
Users who are interested in azure-llm-fine-tuning are comparing it to the libraries listed below.
- Performs benchmarking on two Korean datasets with minimal time and effort. ☆27 · Updated 5 months ago
- This hands-on walks you through fine-tuning an open source LLM on Azure and serving the fine-tuned model on Azure. It is intended for Dat… ☆39 · Updated 2 months ago
- A collection of Korean NLP hands-on labs on Amazon SageMaker ☆17 · Updated last year
- AutoRAG example about benchmarking Korean embeddings. ☆27 · Updated 3 months ago
- This lab is a 1-day/2-day end-to-end SLM workshop led and developed by AI GBB. Attendees will learn how to quickly and easily perform the… ☆32 · Updated 2 weeks ago
- This hands-on lab guides you on how to easily train and deploy Korean NLP models in a cloud-native environment using SageMaker's Hugging Face… ☆20 · Updated 2 years ago
- Official repository for KoMT-Bench built by LG AI Research ☆51 · Updated 5 months ago
- LINER PDF Chat Tutorial with ChatGPT & Pinecone ☆46 · Updated last year
- Evaluate gpt-4o on CLIcK (Korean NLP Dataset) ☆20 · Updated 8 months ago
- KakaoTalk GPT ☆18 · Updated 9 months ago
- ☆41 · Updated last year
- ☆70 · Updated last month
- SageMaker Polyglot-based RAG with OpenSearch ☆16 · Updated 9 months ago
- Forked repo from https://github.com/EleutherAI/lm-evaluation-harness/commit/1f66adc ☆72 · Updated 11 months ago
- Evaluation of Korean models using a self-built Korean evaluation dataset ☆31 · Updated 8 months ago
- Korean datasets available on Hugging Face ☆23 · Updated 3 months ago
- A repository implementing Google's Chain-of-Thought Reasoning without Prompting in code ☆62 · Updated 4 months ago
- KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models ☆25 · Updated 5 months ago
- MLOps and LLMOps with AWS SageMaker ☆33 · Updated last year
- ☆64 · Updated 10 months ago
- Generates synthetic data for LLM fine-tuning in arbitrary scenarios in a systematic way ☆21 · Updated 10 months ago
- Kor-IR: Korean Information Retrieval Benchmark ☆79 · Updated 6 months ago
- ☆15 · Updated 4 months ago
- Evaluating language model responses with a reward model ☆27 · Updated 11 months ago
- Demonstrates a problem solver based on an agentic workflow. ☆15 · Updated 2 months ago
- [KO-Platy🥮] KO-Platypus model: llama-2-ko fine-tuned on Korean-Open-platypus ☆77 · Updated last year
- BPE-based Korean T5 model for a unified text-to-text framework ☆63 · Updated 9 months ago
- This hands-on lab walks you through a step-by-step approach to efficiently serving and fine-tuning large-scale Korean models on AWS infra… ☆25 · Updated 11 months ago
- A Korean LLM fine-tuned from KoAlpaca using the IA3 method ☆68 · Updated last year
- Code for automating Korean translation via DeepL ☆12 · Updated last year