qwopqwop200 / GPTQ-for-KoAlpaca
☆14 · Updated last year
Alternatives and similar repositories for GPTQ-for-KoAlpaca:
Users interested in GPTQ-for-KoAlpaca are comparing it to the repositories listed below.
- Korean-OpenOrca: llama2 fine-tuned on the OpenOrca-KO dataset ☆19 · Updated last year
- Translation of the StrategyQA dataset ☆22 · Updated 10 months ago
- 🤗 Sample code for training an LM with minimal setup ☆58 · Updated last year
- Open-source Korean language model ☆82 · Updated last year
- ☆12 · Updated last year
- ☆10 · Updated last year
- ☆34 · Updated last year
- Korean embedding model specialized for the finance domain ☆20 · Updated 6 months ago
- [KO-Platy🥮] KO-platypus model: llama-2-ko fine-tuned on Korean-Open-platypus ☆77 · Updated last year
- KoTAN: Korean Translation and Augmentation with fine-tuned NLLB ☆24 · Updated last year
- BPE-based Korean T5 model for a unified text-to-text framework ☆63 · Updated 10 months ago
- Evaluating language model responses with a reward model ☆27 · Updated 11 months ago
- Korean model evaluation using a self-built Korean evaluation dataset ☆31 · Updated 8 months ago
- ☆32 · Updated last year
- Chatbot project specialized for the Korean medical domain ☆30 · Updated last year
- Korean psychological counseling dataset ☆75 · Updated last year
- Gugugo: an open-source Korean translation model project ☆76 · Updated 10 months ago
- ☆19 · Updated 4 years ago
- Official repository for KoMT-Bench built by LG AI Research ☆55 · Updated 6 months ago
- ☆32 · Updated last year
- High-performance vector search engine with no loss of accuracy, achieved through GPUs and dynamic placement ☆28 · Updated last year
- ☆124 · Updated last year
- Makes running benchmarks simple yet maintainable, again. Currently supports only Korean-based cross-encoders. ☆14 · Updated last month
- Train GEMMA on TPU/GPU! (Codebase for training the Gemma-Ko series) ☆46 · Updated 11 months ago
- Korean LLM: KoAlpaca fine-tuned using the IA3 method ☆68 · Updated last year
- ☆19 · Updated 6 months ago
- Wikitext-format dataset of Namuwiki (the best-known Korean wiki) ☆51 · Updated 4 years ago
- ☆20 · Updated last year
- ☆41 · Updated last year
- LINER PDF Chat Tutorial with ChatGPT & Pinecone ☆46 · Updated last year