Marker-Inc-Korea / RAGchain
Extension of LangChain for RAG. Easy benchmarking, multiple retrievals, reranker, time-aware RAG, and more.
☆284 · Updated last year
Alternatives and similar repositories for RAGchain
Users interested in RAGchain are comparing it to the repositories listed below.
- [KO-Platy🥮] KO-platypus, a model fine-tuned from llama-2-ko on Korean-Open-platypus ☆75 · Updated 3 months ago
- A multi-domain reasoning benchmark for Korean language models ☆199 · Updated last year
- ☆123 · Updated 2 years ago
- ☆69 · Updated last year
- ☆112 · Updated 4 months ago
- ☆210 · Updated 2 years ago
- A repository implementing Google's Chain-of-Thought Reasoning without Prompting in code ☆67 · Updated last year
- ☆40 · Updated last year
- Korean-OpenOrca: llama2 fine-tuned on the OpenOrca-KO dataset ☆19 · Updated 2 years ago
- The Universe of Evaluation. All about evaluation for LLMs. ☆230 · Updated last year
- Korean Multi-task Instruction Tuning ☆158 · Updated last year
- A Korean LLM fine-tuned from KoAlpaca using the IA3 method ☆69 · Updated 2 years ago
- MLOps and LLMOps with AWS SageMaker ☆32 · Updated 2 years ago
- Full Stack SolarLLM Zero to All ☆168 · Updated 9 months ago
- Kor-IR: Korean Information Retrieval Benchmark ☆88 · Updated last year
- An open-source Korean language model ☆82 · Updated 2 years ago
- Code that blocks foreign-language token generation in LLMs ☆81 · Updated 4 months ago
- AutoRAG example of benchmarking Korean embeddings ☆41 · Updated last year
- Forked repo from https://github.com/EleutherAI/lm-evaluation-harness/commit/1f66adc ☆81 · Updated last year
- Gugugo: an open-source Korean translation model project ☆83 · Updated last year
- LINER PDF Chat Tutorial with ChatGPT & Pinecone ☆48 · Updated 2 years ago
- TeddyNote Parser API Client Library for Python ☆33 · Updated 8 months ago
- A Korean psychological counseling dataset ☆80 · Updated 2 years ago
- Korean Sentence Embedding Model Performance Benchmark for RAG ☆49 · Updated 10 months ago
- A leaderboard for Korean text embedding models ☆93 · Updated last year
- Sentence embeddings built on Korean pretrained language models ☆205 · Updated 2 years ago
- Upstage API examples and guides ☆185 · Updated this week
- This repository aims to develop CoT Steering based on CoT without Prompting. It focuses on enhancing the model's latent reasoning capabil… ☆114 · Updated 5 months ago
- 42dot LLM consists of a pre-trained language model, 42dot LLM-PLM, and a fine-tuned model, 42dot LLM-SFT, which is trained to respond to … ☆130 · Updated last year
- Stockelper Multi Agent Backend FastAPI, a 가짜연 (Pseudo Lab) 9th-term GitHub contribution-streak project ☆67 · Updated 11 months ago