choijhyeok / python-hwpxlib
A GitHub repo that makes the hwpxlib package easy to use from Python.
☆35 · Updated 8 months ago
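For context, a wrapper like this typically drives the original Java hwpxlib jar through a Python-to-Java bridge such as jpype. Below is a minimal sketch of that pattern; the jar name, the class path kr.dogfoot.hwpxlib.reader.HWPXReader, and the fromFilepath entry point are illustrative assumptions, not confirmed against this repo.

```python
# A minimal sketch of the Python-to-Java bridge pattern such a wrapper uses.
# Assumptions (not confirmed from this repo): the hwpxlib jar sits next to the
# script, and the reader class is kr.dogfoot.hwpxlib.reader.HWPXReader with a
# fromFilepath entry point.
import jpype

# Start a JVM with the hwpxlib jar on the classpath.
jpype.startJVM(classpath=["hwpxlib.jar"])

HWPXReader = jpype.JClass("kr.dogfoot.hwpxlib.reader.HWPXReader")

# Parse an .hwpx document into hwpxlib's object model (hypothetical entry point).
hwpx_file = HWPXReader.fromFilepath("sample.hwpx")
print(hwpx_file)

jpype.shutdownJVM()
```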
Alternatives and similar repositories for python-hwpxlib
Users interested in python-hwpxlib are comparing it to the libraries listed below.
- A GitHub repo that makes the hwplib package easy to use from Python. ☆50 · Updated 8 months ago
- AutoRAG example about benchmarking Korean embeddings. ☆41 · Updated last year
- ☆25 · Updated last year
- ☆64 · Updated 4 months ago
- Code that blocks an LLM from generating foreign-language tokens. ☆81 · Updated 4 months ago
- ☆40 · Updated last year
- Kor-IR: Korean Information Retrieval Benchmark ☆88 · Updated last year
- ☆39 · Updated 8 months ago
- A Korean psychological counseling dataset. ☆80 · Updated 2 years ago
- KakaoTalk GPT ☆19 · Updated last year
- Evaluate gpt-4o on CLIcK (Korean NLP Dataset). ☆20 · Updated last year
- ☆112 · Updated 4 months ago
- A problem solver based on an agentic workflow. ☆16 · Updated 9 months ago
- ☆69 · Updated last year
- [KO-Platy🥮] KO-Platypus, a llama-2-ko model fine-tuned on Korean-Open-platypus. ☆75 · Updated 3 months ago
- SKT A.X LLM 4.0 ☆144 · Updated 4 months ago
- ☆48 · Updated last year
- nanoRLHF: a from-scratch journey into how LLMs and RLHF really work. ☆36 · Updated last week
- A finance-domain language model trained by KakaoBank & FnGuide. ☆121 · Updated last year
- This repository aims to develop CoT Steering based on CoT without Prompting. It focuses on enhancing the model’s latent reasoning capabil… ☆114 · Updated 5 months ago
- Liner LLM Meetup archive ☆71 · Updated last year
- Pseudo Lab 9th-generation GitHub commit-streak project: Stockelper Multi Agent Backend (FastAPI). ☆67 · Updated 10 months ago
- A repo implementing Google's Chain-of-Thought Reasoning without Prompting in code (see the decoding sketch after this list). ☆67 · Updated last year
- The most modern LLM evaluation toolkit ☆70 · Updated last month
- Korean-OpenOrca, a llama2 model fine-tuned on the OpenOrca-KO dataset. ☆19 · Updated 2 years ago
- A Korean LLM created by fine-tuning KoAlpaca with the IA3 method. ☆69 · Updated 2 years ago
- Claude-router, a project for using open models in claude-code. ☆55 · Updated 3 months ago
- Performs benchmarking on two Korean datasets with minimal time and effort. ☆45 · Updated this week
- A multi-domain reasoning benchmark for Korean language models. ☆199 · Updated last year
- 42dot LLM consists of a pre-trained language model, 42dot LLM-PLM, and a fine-tuned model, 42dot LLM-SFT, which is trained to respond to … ☆130 · Updated last year
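Two entries above (the CoT Steering repo and the reimplementation of Google's Chain-of-Thought Reasoning without Prompting) rest on the same decoding idea: branch over the top-k candidates for the first generated token, continue each branch greedily, and prefer the branch whose tokens carry the largest top-1 vs. top-2 probability margin. Below is a minimal sketch of that idea with Hugging Face transformers; the model name and prompt are placeholders, and nothing here is taken from the repos listed.

```python
# Minimal CoT-decoding sketch: branch on the k most likely first tokens,
# decode each branch greedily, and score branches by the mean gap between
# the top-1 and top-2 token probabilities (the "confidence margin").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def cot_decode(prompt: str, k: int = 5, max_new_tokens: int = 40):
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        first_logits = model(ids).logits[0, -1]
    branches = []
    for first_id in torch.topk(first_logits, k).indices:  # k first-token branches
        seq = torch.cat([ids, first_id.view(1, 1)], dim=-1)
        margins = []
        for _ in range(max_new_tokens):
            with torch.no_grad():
                probs = torch.softmax(model(seq).logits[0, -1], dim=-1)
            top2 = torch.topk(probs, 2)
            margins.append((top2.values[0] - top2.values[1]).item())
            next_id = top2.indices[0].view(1, 1)  # greedy continuation
            seq = torch.cat([seq, next_id], dim=-1)
            if next_id.item() == tok.eos_token_id:
                break
        text = tok.decode(seq[0, ids.shape[-1]:], skip_special_tokens=True)
        branches.append((sum(margins) / len(margins), text))
    return max(branches)  # highest mean confidence margin wins

print(cot_decode("Q: There are 3 apples and I eat one. How many are left?\nA:"))
```

This recomputes the full forward pass at every step for clarity; a real implementation would use the KV cache and batch the k branches.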