deep-diver / janus
Generate synthetic data for LLM fine-tuning in arbitrary situations in a systematic way
☆22 Updated last year
Alternatives and similar repositories for janus
Users interested in janus are comparing it to the libraries listed below
- 1-Click is all you need. ☆63 Updated last year
- BERT score for text generation ☆12 Updated 10 months ago
- High-performance vector search engine with no loss of accuracy through GPU and dynamic placement ☆31 Updated 4 months ago
- A repository implementing Google's Chain-of-Thought Reasoning without Prompting in code. ☆67 Updated last year
- KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models ☆25 Updated last year
- Korean-OpenOrca: llama2 fine-tuned on the OpenOrca-KO dataset ☆19 Updated 2 years ago
- Evaluate gpt-4o on CLIcK (Korean NLP Dataset) ☆20 Updated last year
- Translation of the StrategyQA dataset ☆23 Updated last year
- Evaluating language model responses with a reward model ☆29 Updated last year
- ☆64 Updated 4 months ago
- Performs benchmarking on two Korean datasets with minimal time and effort. ☆43 Updated last month
- ☆20 Updated last year
- CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean ☆47 Updated 11 months ago
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆116 Updated last year
- A GitHub repo that makes the hwpxlib package easy to use from Python ☆35 Updated 8 months ago
- LINER PDF Chat Tutorial with ChatGPT & Pinecone ☆48 Updated 2 years ago
- Train GEMMA on TPU/GPU! (Codebase for training Gemma-Ko Series) ☆48 Updated last year
- Korean model evaluation using a self-built Korean evaluation dataset ☆31 Updated last year
- Official repository for KoMT-Bench built by LG AI Research ☆70 Updated last year
- ☆36 Updated 2 years ago
- ☆39 Updated 8 months ago
- Fine-tuning llama2 using the chain-of-thought approach ☆10 Updated 2 years ago
- ☆15 Updated 2 years ago
- A problem solver based on an agentic workflow. ☆16 Updated 9 months ago
- Korean datasets available on huggingface ☆33 Updated last year
- "Learning-based One-line intelligence Owner Network Connectivity Tool" ☆16 Updated 2 years ago
- MLOps and LLMOps with AWS SageMaker ☆32 Updated 2 years ago
- ☆10 Updated last year
- nanoRLHF: from-scratch journey into how LLMs and RLHF really work. ☆36 Updated last week
- Make running benchmarks simple yet maintainable, again. Now supports only Korean-based cross-encoders. ☆24 Updated 3 weeks ago