deep-diver / hf-daily-paper-newsletter
Newsletter bot for 🤗 Daily Papers
☆130 · Updated this week
Alternatives and similar repositories for hf-daily-paper-newsletter
Users interested in hf-daily-paper-newsletter are comparing it to the libraries listed below.
- A lightweight adjustment tool for smoothing token probabilities in the Qwen models to encourage balanced multilingual generation. ☆97 · Updated 5 months ago
- 1-Click is all you need. ☆63 · Updated last year
- Official repository for EXAONE 3.5 built by LG AI Research ☆202 · Updated 11 months ago
- This project aims to automatically translate and summarize Hugging Face's daily papers into Korean using ChatGPT. ☆52 · Updated 7 months ago
- Evaluate gpt-4o on CLIcK (Korean NLP Dataset) ☆20 · Updated last year
- ☆60 · Updated 2 months ago
- A Korean LLM created by fine-tuning KoAlpaca with the IA3 method ☆69 · Updated 2 years ago
- ☆69 · Updated last year
- ☆32 · Updated last year
- Official repository for EXAONE built by LG AI Research ☆180 · Updated last year
- [KO-Platy🥮] The KO-platypus model, created by fine-tuning llama-2-ko with Korean-Open-platypus ☆75 · Updated 3 months ago
- ☆64 · Updated 4 months ago
- ☆40 · Updated last year
- Full Stack SolarLLM Zero to All ☆168 · Updated 9 months ago
- ☆36 · Updated last year
- 42dot LLM consists of a pre-trained language model, 42dot LLM-PLM, and a fine-tuned model, 42dot LLM-SFT, which is trained to respond to … ☆130 · Updated last year
- This repository aims to develop CoT Steering based on CoT without Prompting. It focuses on enhancing the model’s latent reasoning capabil… ☆114 · Updated 5 months ago
- A GitHub repo that makes the hwpxlib package easy to use from Python ☆35 · Updated 8 months ago
- Extension of LangChain for RAG. Easy benchmarking, multiple retrievals, reranker, time-aware RAG, and so on... ☆284 · Updated last year
- ☆104 · Updated last year
- Generate synthetic data for LLM fine-tuning in arbitrary scenarios in a systematic way ☆22 · Updated last year
- Korean-OpenOrca, created by fine-tuning llama2 on the OpenOrca-KO dataset ☆19 · Updated 2 years ago
- The most modern LLM evaluation toolkit ☆70 · Updated last month
- Efficient fine-tuning for ko-llm models ☆184 · Updated last year
- The Universe of Evaluation. All about evaluation for LLMs. ☆230 · Updated last year
- ☆112 · Updated 4 months ago
- ☆107 · Updated 2 years ago
- Make running benchmarks simple yet maintainable, again. Currently supports only Korean-based cross-encoders. ☆24 · Updated last week
- A code implementation that blocks foreign-language token generation in LLM models ☆81 · Updated 4 months ago
- ☆46 · Updated last year