dnotitia / smoothie-qwen
A lightweight adjustment tool for smoothing token probabilities in Qwen models to encourage balanced multilingual generation.
☆98 · Updated 5 months ago
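The idea described above — reshaping a model's next-token distribution to down-weight tokens of an over-generated language — can be illustrated with a toy sketch. This is a hypothetical example, not smoothie-qwen's actual algorithm; the function name, the `scale` parameter, and the scaling scheme are all assumptions:

```python
import math

def smooth_probs(logits, target_ids, scale=0.5):
    """Toy token-probability smoothing: shrink the probability mass of
    a set of token ids (e.g. tokens of an over-generated language) by
    `scale`, then renormalize. Hypothetical sketch only; not the
    actual smoothie-qwen implementation."""
    m = max(logits)
    probs = [math.exp(x - m) for x in logits]        # softmax
    total = sum(probs)
    probs = [p / total for p in probs]
    probs = [p * scale if i in target_ids else p     # down-weight targeted tokens
             for i, p in enumerate(probs)]
    z = sum(probs)
    return [p / z for p in probs]                    # renormalize to sum to 1

# With uniform logits, down-weighting token 0 shifts its mass to the others
# while the distribution still sums to 1:
print(smooth_probs([0.0, 0.0, 0.0], {0}))
```

In practice such a transform would be applied per decoding step (e.g. as a logits processor in a generation loop) rather than to a standalone list.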
Alternatives and similar repositories for smoothie-qwen
Users interested in smoothie-qwen are comparing it to the libraries listed below.
- 1-Click is all you need. ☆63 · Updated last year
- Official repository for EXAONE 3.5 built by LG AI Research ☆202 · Updated last year
- This repository aims to develop CoT Steering based on CoT without Prompting. It focuses on enhancing the model's latent reasoning capabilities… ☆114 · Updated 6 months ago
- Code implementation that blocks foreign-language token generation in LLM models ☆82 · Updated 4 months ago
- ☆12 · Updated last year
- ☆20 · Updated last year
- ☆12 · Updated last year
- ☆64 · Updated 5 months ago
- Ko-Arena-Hard-Auto: An automatic LLM benchmark for Korean ☆22 · Updated 8 months ago
- Official repository for KoMT-Bench built by LG AI Research ☆71 · Updated last year
- ☆32 · Updated last year
- BERT score for text generation ☆12 · Updated 11 months ago
- Make running benchmarks simple yet maintainable, again. Currently supports only Korean-based cross-encoders. ☆26 · Updated 3 weeks ago
- ☆69 · Updated last year
- The Universe of Evaluation. All about evaluation for LLMs. ☆230 · Updated last year
- The most modern LLM evaluation toolkit ☆70 · Updated last month
- Newsletter bot for 🤗 Daily Papers ☆130 · Updated this week
- Korean model evaluation using an in-house Korean evaluation dataset ☆31 · Updated last year
- A repository implementing Google's Chain-of-Thought Reasoning without Prompting in code ☆66 · Updated last year
- Official repository for EXAONE Deep built by LG AI Research ☆402 · Updated 7 months ago
- [ACL 2025] DICE-BENCH: Evaluating the Tool-Use Capabilities of Large Language Models in Multi-Round, Multi-Party Dialogues ☆25 · Updated 5 months ago
- Korean-OpenOrca: llama2 fine-tuned on the OpenOrca-KO dataset ☆19 · Updated 2 years ago
- ☆103 · Updated 2 months ago
- nanoRLHF: a from-scratch journey into how LLMs and RLHF really work ☆38 · Updated last week
- AutoRAG example about benchmarking Korean embeddings ☆42 · Updated last year
- A hackable, simple, and research-friendly GRPO training framework with high-speed weight synchronization in a multi-node environment ☆35 · Updated 4 months ago
- Korean datasets available on huggingface ☆34 · Updated last year
- KoCommonGEN v2: A Benchmark for Navigating Korean Commonsense Reasoning Challenges in Large Language Models ☆25 · Updated last year
- Efficient fine-tuning for ko-llm models ☆184 · Updated last year
- A beginner's guide to using Llama 3 and the services involved ☆78 · Updated last year