deep-diver / hllama
hllama is a library that aims to provide a set of utility tools for large language models.
☆10 · Updated 6 months ago
Related projects
Alternatives and complementary repositories for hllama
- "Learning-based One-line intelligence Owner Network Connectivity Tool"☆15Updated last year
- High-performance vector search engine with no loss of accuracy through GPU and dynamic placement☆28Updated last year
- Difference-based Contrastive Learning for Korean Sentence Embeddings☆24Updated last year
- Generate synthetic data for LLM fine-tuning in arbitrary situations in a systematic way ☆21 · Updated 7 months ago
- Calculating the expected time for training an LLM ☆38 · Updated last year
- Fine-tuning llama2 using the chain-of-thought approach ☆10 · Updated 11 months ago
- Serving Example of CodeGen-350M-Mono-GPTJ on Triton Inference Server with Docker and Kubernetes ☆20 · Updated last year
- LINER PDF Chat Tutorial with ChatGPT & Pinecone ☆46 · Updated last year
- exBERT on Transformers🤗 ☆10 · Updated 3 years ago
- KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models ☆25 · Updated 2 months ago
- Evaluating language model responses with a reward model ☆27 · Updated 8 months ago
- Translation of the StrategyQA dataset ☆20 · Updated 6 months ago
- AskUp Search ChatGPT Plugin ☆20 · Updated last year
- Korean datasets available on Hugging Face ☆21 · Updated last month
- Hate speech detection corpus in Korean, shared with an EMNLP 2023 paper ☆13 · Updated 6 months ago
- MeCab model trained with OpenKorPos ☆22 · Updated 2 years ago
- Namuwiki dataset segmented at the sentence level; download it from Releases or via tfds-korean ☆19 · Updated 3 years ago
- Official repository for KoMT-Bench built by LG AI Research ☆49 · Updated 3 months ago
- A high school student's simple stochastic parrot project ☆19 · Updated last year
- The aim of this project is to publish and archive newsletters to a target email address ☆18 · Updated 9 months ago
- Official implementation of "OffsetBias: Leveraging Debiased Data for Tuning Evaluators" ☆14 · Updated last month
- Beyond LM: How can language models go forward in the future? ☆15 · Updated last year
- A collection of publicly available Korean instruction datasets for training language models ☆19 · Updated last year