modelscope / easydistill
A toolkit for knowledge distillation of large language models
☆127 Updated last week
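As a quick orientation to the technique easydistill packages, the snippet below is a minimal sketch of the classic soft-label distillation objective (Hinton-style KL divergence between temperature-softened teacher and student logits). It is a generic PyTorch illustration, not easydistill's API; the function name and tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Generic soft-label KD loss (illustrative, not easydistill's API)."""
    # Soften both distributions with the same temperature.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # KL(teacher || student); scale by T^2 so gradient magnitudes stay
    # comparable across temperatures (Hinton et al., 2015).
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Illustrative shapes: a batch of 4 token positions over a 32k vocabulary.
student = torch.randn(4, 32000)
teacher = torch.randn(4, 32000)
print(distillation_loss(student, teacher))
```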
Alternatives and similar repositories for easydistill
Users interested in easydistill are comparing it to the libraries listed below.
- ☆231 Updated last year
- ☆49 Updated last year
- ☆83 Updated last year
- ☆53 Updated 10 months ago
- Code for the Piccolo embedding model from SenseTime ☆134 Updated last year
- ☆157 Updated 3 months ago
- The newest version of Llama 3, with the source code explained line by line in Chinese ☆22 Updated last year
- Complete training code for the open-source high-performance Llama model, covering the full process from pre-training to RLHF. ☆66 Updated 2 years ago
- A 1.4B sLLM for Chinese and English - HammerLLM🔨 ☆44 Updated last year
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆111 Updated 2 months ago
- Fantastic Data Engineering for Large Language Models ☆89 Updated 7 months ago
- ☆144 Updated last year
- How to train an LLM tokenizer ☆151 Updated 2 years ago
- ☆97 Updated last year
- SuperCLUE-Math6: exploring a new generation of Chinese-native multi-turn, multi-step mathematical reasoning datasets ☆59 Updated last year
- Official repository for the SIGIR 2024 demo paper "An Integrated Data Processing Framework for Pretraining Foundation Models" ☆82 Updated 11 months ago
- ☆40 Updated last year
- ☆288 Updated 2 months ago
- Documentation for LLaMA Factory ☆146 Updated 2 weeks ago
- Ling is a MoE LLM provided and open-sourced by InclusionAI. ☆181 Updated 2 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆253 Updated 7 months ago
- ☆112 Updated 8 months ago
- A visualization tool for deeper understanding and easier debugging of RLHF training ☆240 Updated 5 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆136 Updated last year
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆255 Updated 3 weeks ago
- Scaling Preference Data Curation via Human-AI Synergy ☆95 Updated last month
- A personal reimplementation of Google's Infini-transformer using a small 2B model. The project includes both model and train… ☆58 Updated last year
- a-m-team's exploration in large language modeling ☆181 Updated 2 months ago
- A lightweight local website for displaying the performance of different chat models ☆87 Updated last year
- A highly capable, lightweight 2.4B LLM using only 1T of pre-training data, with all details shared ☆200 Updated last week