UpstageAI / evalverse
The Universe of Evaluation. All about the evaluation for LLMs.
☆228 · Updated last year
Alternatives and similar repositories for evalverse
Users interested in evalverse are comparing it to the libraries listed below.
- ☆20 · Updated last year
- 1-Click is all you need. ☆62 · Updated last year
- Evolve LLM training instructions, from English instructions to any language. ☆118 · Updated 2 years ago
- A repository implementing Google's Chain-of-Thought Reasoning without Prompting in code. ☆67 · Updated last year
- Efficient fine-tuning for ko-llm models ☆182 · Updated last year
- KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models ☆25 · Updated last year
- Official repository for KoMT-Bench built by LG AI Research ☆69 · Updated last year
- ☆108 · Updated 3 months ago
- The Universe of Data. All about data, data science, and data engineering ☆564 · Updated last year
- Korean model evaluation using a self-built Korean evaluation dataset ☆31 · Updated last year
- [KO-Platy🥮] KO-platypus model: llama-2-ko fine-tuned using Korean-Open-platypus ☆75 · Updated 2 months ago
- Forked repo from https://github.com/EleutherAI/lm-evaluation-harness/commit/1f66adc ☆80 · Updated last year
- BERT score for text generation ☆12 · Updated 9 months ago
- The most modern LLM evaluation toolkit ☆70 · Updated last month
- Korean-OpenOrca: llama2 fine-tuned using the OpenOrca-KO dataset ☆19 · Updated last year
- Extension of Langchain for RAG. Easy benchmarking, multiple retrievals, reranker, time-aware RAG, and so on... ☆283 · Updated last year
- Performs benchmarking on two Korean datasets with minimal time and effort. ☆43 · Updated 2 weeks ago
- AutoRAG example about benchmarking Korean embeddings. ☆41 · Updated last year
- Makes running benchmarks simple yet maintainable, again. Currently only supports Korean-based cross-encoders. ☆21 · Updated 3 weeks ago
- Data processing system for polyglot ☆92 · Updated 2 years ago
- ☆68 · Updated last year
- A code implementation that prevents LLMs from generating foreign-language tokens ☆80 · Updated 2 months ago
- A consolidated collection(?) of Korean benchmark evaluation code ☆20 · Updated 11 months ago
- [ACL 2025] DICE-BENCH: Evaluating the Tool-Use Capabilities of Large Language Models in Multi-Round, Multi-Party Dialogues ☆25 · Updated 3 months ago
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆116 · Updated last year
- Train GEMMA on TPU/GPU! (Codebase for training the Gemma-Ko series) ☆48 · Updated last year
- A hackable, simple, and research-friendly GRPO training framework with high-speed weight synchronization in a multi-node environment. ☆31 · Updated 2 months ago
- [ICLR 2024 & NeurIPS 2023 WS] An Evaluator LM that is open-source, offers reproducible evaluation, and is inexpensive to use. Specifically d… ☆306 · Updated last year
- Korean datasets available on huggingface ☆30 · Updated last year
- A lightweight adjustment tool for smoothing token probabilities in the Qwen models to encourage balanced multilingual generation. ☆94 · Updated 3 months ago