dnotitia / smoothie-qwen
A lightweight adjustment tool for smoothing token probabilities in the Qwen models to encourage balanced multilingual generation.
☆80 Updated last month
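The probability-smoothing idea described above can be illustrated with a minimal sketch. This is not smoothie-qwen's actual implementation (which operates on model weights); the function name `dampen`, the `factor` parameter, and the toy distribution are assumptions chosen for illustration. The core idea: scale down the probability mass of an over-represented token set, then renormalize so the distribution still sums to one.

```python
# Illustrative sketch only; not smoothie-qwen's actual method.
# `dampen`, `target_ids`, and `factor` are assumed names for this example.

def dampen(probs, target_ids, factor=0.5):
    """Scale down the probabilities of selected token ids and renormalize,
    so one language's tokens no longer dominate the distribution."""
    adjusted = [p * factor if i in target_ids else p
                for i, p in enumerate(probs)]
    total = sum(adjusted)
    return [p / total for p in adjusted]

# Example: token 0 stands in for an over-represented language's token;
# its mass is halved and the remainder is redistributed proportionally.
smoothed = dampen([0.5, 0.3, 0.2], target_ids={0}, factor=0.5)
```

Because the renormalization is proportional, the relative odds among non-target tokens are preserved; only the target set loses probability mass.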
Alternatives and similar repositories for smoothie-qwen
Users interested in smoothie-qwen are comparing it to the libraries listed below.
- 1-Click is all you need. ☆62 Updated last year
- ☆12 Updated last year
- Official repository for EXAONE 3.5 built by LG AI Research ☆200 Updated 8 months ago
- ☆20 Updated last year
- Official repository for KoMT-Bench built by LG AI Research ☆66 Updated last year
- BERT score for text generation ☆12 Updated 7 months ago
- This repository aims to develop CoT Steering based on CoT without Prompting. It focuses on enhancing the model's latent reasoning capabil… ☆112 Updated 2 months ago
- ☆62 Updated last month
- [ACL 2025] DICE-BENCH: Evaluating the Tool-Use Capabilities of Large Language Models in Multi-Round, Multi-Party Dialogues ☆23 Updated last month
- Ko-Arena-Hard-Auto: An automatic LLM benchmark for Korean ☆23 Updated 4 months ago
- Make running benchmarks simple yet maintainable, again. Currently supports only Korean-based cross-encoders. ☆20 Updated 2 months ago
- KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models ☆25 Updated last year
- A code implementation of Google's Chain-of-Thought Reasoning without Prompting paper ☆67 Updated 11 months ago
- ☆12 Updated 8 months ago
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆116 Updated last year
- A code implementation that blocks foreign-language token generation in LLMs ☆79 Updated 3 weeks ago
- Train GEMMA on TPU/GPU! (Codebase for training the Gemma-Ko series) ☆48 Updated last year
- The Universe of Evaluation. All about evaluation for LLMs. ☆227 Updated last year
- Efficient fine-tuning for ko-llm models ☆182 Updated last year
- ☆32 Updated 8 months ago
- Newsletter bot for 🤗 Daily Papers ☆126 Updated this week
- Korean-OpenOrca: llama2 fine-tuned on the OpenOrca-KO dataset ☆19 Updated last year
- Evaluation of Korean language models using an in-house Korean evaluation dataset ☆31 Updated last year
- A hackable, simple, and research-friendly GRPO training framework with high-speed weight synchronization in a multi-node environment. ☆19 Updated this week
- Official repository for EXAONE Deep built by LG AI Research ☆401 Updated 2 months ago
- The most modern LLM evaluation toolkit ☆70 Updated this week
- Evolve LLM training instructions, from English instructions to any language. ☆119 Updated last year
- ☆68 Updated last year
- An extended project of the LLM Compiler paper, focusing on developing LLM-based Autonomous Agents. ☆26 Updated 10 months ago
- Korean datasets available on huggingface ☆29 Updated 10 months ago