davidkim205 / nox
Efficient fine-tuning for ko-llm models
☆182Updated last year
Alternatives and similar repositories for nox
Users interested in nox are comparing it to the libraries listed below
- 1-Click is all you need.☆62Updated last year
- The Universe of Evaluation. All about evaluation for LLMs.☆224Updated last year
- Sakura-SOLAR-DPO: Merge, SFT, and DPO (see the DPO sketch after this list)☆116Updated last year
- A lightweight adjustment tool for smoothing token probabilities in Qwen models to encourage balanced multilingual generation (see the logits-processor sketch after this list)☆78Updated last week
- Extension of LangChain for RAG. Easy benchmarking, multiple retrievals, reranker, time-aware RAG, and so on...☆281Updated last year
- Evolve LLM training instructions, from English instructions to any language.☆118Updated last year
- Manage histories of LLM-based applications☆91Updated last year
- ☆12Updated last year
- Make running benchmarks simple yet maintainable, again. Currently supports only Korean-based cross-encoders.☆19Updated 3 weeks ago
- Korean-OpenOrca: llama2 fine-tuned using the OpenOrca-KO dataset☆19Updated last year
- This repository aims to develop CoT Steering based on CoT without Prompting. It focuses on enhancing the model’s latent reasoning capabilities… (see the CoT-decoding sketch after this list)☆110Updated 3 weeks ago
- Newsletter bot for 🤗 Daily Papers☆125Updated this week
- ☆15Updated 2 years ago
- [KO-Platy🥮] KO-platypus model: llama-2-ko fine-tuned using Korean-Open-platypus☆75Updated last year
- ☆68Updated last year
- Korean model evaluation using an in-house Korean evaluation dataset☆31Updated last year
- Train GEMMA on TPU/GPU! (Codebase for training Gemma-Ko Series)☆48Updated last year
- [ICLR 2024 & NeurIPS 2023 WS] An Evaluator LM that is open-source, offers reproducible evaluation, and is inexpensive to use. Specifically d…☆300Updated last year
- ☆20Updated 11 months ago
- Official repository for EXAONE 3.5 built by LG AI Research☆195Updated 7 months ago
- Code implementation that blocks foreign-language token generation in LLMs (see the logits-processor sketch after this list)☆75Updated last week
- Official repository for KoMT-Bench built by LG AI Research☆64Updated 11 months ago
- A beginner's guide to using Llama3 and which services are used.☆77Updated last year
- A repository implementing Google's Chain-of-Thought Reasoning without Prompting paper in code (see the CoT-decoding sketch after this list).☆66Updated 9 months ago
- Generate synthetic data for LLM fine-tuning in arbitrary situations in a systematic way☆22Updated last year
- Korean Multi-task Instruction Tuning☆158Updated last year
- A Korean LLM built by fine-tuning KoAlpaca with the IA3 method (see the IA3 sketch after this list)☆69Updated last year
- BERT score for text generation (see the BERTScore sketch after this list)☆12Updated 6 months ago
- KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models☆25Updated 10 months ago
- Ko-Arena-Hard-Auto: An automatic LLM benchmark for Korean☆23Updated 2 months ago
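
For the Sakura-SOLAR-DPO entry above: DPO (Direct Preference Optimization) trains a model directly on preference pairs, with no separate reward model. A minimal sketch using recent Hugging Face TRL; the model and dataset names are hypothetical placeholders, not that repo's actual setup.

```python
# Minimal DPO sketch with Hugging Face TRL.
# NOTE: model/dataset names are hypothetical placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "your-org/solar-sft-checkpoint"  # placeholder SFT checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# DPO expects (prompt, chosen, rejected) triples.
train_dataset = load_dataset("your-org/preference-pairs", split="train")  # placeholder

args = DPOConfig(output_dir="dpo-out", beta=0.1)  # beta scales the preference margin
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset,
                     processing_class=tokenizer)
trainer.train()
```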
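The token-probability-smoothing and foreign-token-blocking entries both come down to editing logits at decode time. A minimal sketch using the transformers LogitsProcessor interface; which token IDs to target is each repo's own logic, so the IDs below are made up.

```python
# Bias a fixed set of token IDs at every decoding step.
# A large negative bias effectively bans the tokens (e.g. foreign-script
# tokens); a small bias merely smooths their probabilities.
import torch
from transformers import LogitsProcessor, LogitsProcessorList

class TokenBiasProcessor(LogitsProcessor):
    def __init__(self, token_ids, bias):
        self.token_ids = torch.tensor(token_ids, dtype=torch.long)
        self.bias = bias

    def __call__(self, input_ids, scores):
        scores[:, self.token_ids] = scores[:, self.token_ids] + self.bias
        return scores

# Usage with any transformers causal LM (token IDs 1234/5678 are hypothetical):
# processors = LogitsProcessorList([TokenBiasProcessor([1234, 5678], bias=-1e9)])
# model.generate(**inputs, logits_processor=processors)
```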
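The two CoT-without-prompting entries build on the decoding idea from Google's paper (Wang & Zhou, 2024): instead of greedy decoding from the prompt, branch on the top-k candidates for the first token, decode each branch greedily, and prefer the branch the model is most confident in. A rough sketch under those assumptions, not either repo's actual code; gpt2 is a stand-in model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # stand-in; the repos target larger LLMs
model = AutoModelForCausalLM.from_pretrained("gpt2")

def cot_decode(prompt, k=5, max_new_tokens=64):
    inputs = tok(prompt, return_tensors="pt")
    first_logits = model(**inputs).logits[0, -1]
    branches = []
    for t in torch.topk(first_logits, k).indices:  # k alternative first tokens
        ids = torch.cat([inputs["input_ids"], t.view(1, 1)], dim=-1)
        out = model.generate(ids, do_sample=False, max_new_tokens=max_new_tokens,
                             output_scores=True, return_dict_in_generate=True)
        # Confidence: mean margin between top-1 and top-2 token probabilities.
        margins = [torch.softmax(s[0], dim=-1).topk(2).values for s in out.scores]
        conf = torch.stack([m[0] - m[1] for m in margins]).mean().item()
        branches.append((conf, tok.decode(out.sequences[0], skip_special_tokens=True)))
    return max(branches)  # highest-confidence branch wins

print(cot_decode("Q: I have 3 apples and eat 1. How many are left?\nA:"))
```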
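The IA3 entry refers to the IA3 method (Liu et al., 2022), which trains only small learned rescaling vectors inside the attention and feed-forward layers. A minimal sketch with Hugging Face PEFT; the base checkpoint and module names are assumptions (GPT-NeoX-style naming), not necessarily what that repo used.

```python
# Wrap a causal LM with IA3 adapters via PEFT; only the rescaling
# vectors become trainable, so the trainable parameter count is tiny.
from peft import IA3Config, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("beomi/KoAlpaca-Polyglot-5.8B")  # assumed base
config = IA3Config(
    task_type="CAUSAL_LM",
    target_modules=["query_key_value", "dense_4h_to_h"],  # GPT-NeoX module names
    feedforward_modules=["dense_4h_to_h"],
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```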
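And for the BERT score entry, standard usage of the bert-score package looks like this (the example strings are made up):

```python
# Compute BERTScore precision/recall/F1 between candidates and references.
from bert_score import score

cands = ["The cat sat on the mat."]
refs = ["A cat was sitting on the mat."]
P, R, F1 = score(cands, refs, lang="en")  # downloads a default English model
print(F1.mean().item())
```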