Curt-Park / python-monorepo-template
Python monorepo template with Pants
☆20 · Updated last year
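As an aside on what the template's tooling looks like in practice, here is a minimal sketch of a Pants BUILD file for one package in a Python monorepo. It assumes pants.toml enables the pants.backend.python backend; the projects/greeter path and target names are hypothetical, not taken from this repository.

```python
# projects/greeter/BUILD -- hypothetical package inside the monorepo.
# Pants BUILD files use a Python-like DSL; these targets assume
# backend_packages = ["pants.backend.python"] is set in pants.toml.

python_sources(
    name="lib",    # first-party sources in this directory (default globs)
)

python_tests(
    name="tests",  # test files such as *_test.py
)
```

With a layout like this, `pants test projects/greeter::` runs that package's tests; the exact command spelling varies slightly across Pants versions.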
Alternatives and similar repositories for python-monorepo-template:
Users interested in python-monorepo-template are comparing it to the libraries listed below.
- High-performance vector search engine that uses GPUs and dynamic placement with no loss of accuracy ☆28 · Updated last year
- Evaluate gpt-4o on CLIcK (a Korean NLP dataset) ☆20 · Updated 8 months ago
- Serving example of CodeGen-350M-Mono-GPTJ on Triton Inference Server with Docker and Kubernetes ☆20 · Updated last year
- Tiny configuration for Triton Inference Server ☆44 · Updated last week
- Translation of the StrategyQA dataset ☆22 · Updated 9 months ago
- "Learning-based One-line intelligence Owner Network Connectivity Tool" ☆15 · Updated last year
- AskUp Search ChatGPT Plugin ☆20 · Updated last year
- LINER PDF Chat Tutorial with ChatGPT & Pinecone ☆46 · Updated last year
- MLOps and LLMOps with AWS SageMaker ☆33 · Updated last year
- Building a simple stochastic parrot, by a high school student ☆19 · Updated last year
- Fine-tuning llama2 using the chain-of-thought approach ☆10 · Updated last year
- AutoRAG example benchmarking Korean embeddings ☆26 · Updated 3 months ago
- Distilling Task-Specific Knowledge from Teacher Model into BiLSTM ☆32 · Updated last month
- hllama is a library that aims to provide utility tools for large language models ☆10 · Updated 9 months ago
- Korean-OpenOrca: llama2 fine-tuned on the OpenOrca-KO dataset ☆19 · Updated last year
- Korean datasets available on Hugging Face ☆23 · Updated 3 months ago
- Systematically generate synthetic data for LLM fine-tuning in arbitrary situations ☆21 · Updated 10 months ago
- Korean LLM leaderboard and model performance/safety management ☆22 · Updated last year
- Lecture code for building ChatGPT (MyGPT) with your own data ☆20 · Updated 7 months ago
- Evaluating a language model's responses using a reward model ☆27 · Updated 10 months ago
- Image-text multimodal modeling with Polyglot ☆11 · Updated last year
- Telegram chatbot for ChatGPT for personal use ☆12 · Updated last year
- 🔮 LLM GPU Calculator ☆21 · Updated last year
- BPE-based Korean T5 model for a text-to-text unified framework ☆63 · Updated 9 months ago
- 42dot LLM consists of a pre-trained language model, 42dot LLM-PLM, and a fine-tuned model, 42dot LLM-SFT, which is trained to respond to … ☆125 · Updated 10 months ago
- ☆39 · Updated last year
- Official repository for KoMT-Bench built by LG AI Research ☆51 · Updated 5 months ago
- Construct a vector database with sentence embeddings and have your LLM respond based on it (see the sketch after this list) ☆8 · Updated 11 months ago
- Korean embedding model specialized for the financial domain ☆19 · Updated 5 months ago
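Several entries above, such as the vector-database one, describe the same small pipeline: embed sentences, store the vectors, and answer by nearest-neighbor lookup. Here is a minimal sketch of that idea using the sentence-transformers library with an in-memory store in place of a real vector database; the model choice, documents, and `search` helper are illustrative assumptions, not code from any repository listed here.

```python
# pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical corpus standing in for a real document store.
docs = [
    "Pants orchestrates builds and tests across a Python monorepo.",
    "Triton Inference Server serves models over HTTP and gRPC.",
    "Sentence embeddings map text to dense vectors for similarity search.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumption: any sentence-embedding model works
doc_vecs = model.encode(docs, normalize_embeddings=True)  # unit-norm vectors

def search(query: str, k: int = 2) -> list[str]:
    """Return the k documents closest to the query by cosine similarity."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # dot product == cosine on normalized vectors
    return [docs[i] for i in np.argsort(-scores)[:k]]

print(search("How do I serve a model?"))
```

The retrieved documents would then be placed into the LLM's prompt so its answer is grounded in the stored text.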