modelscope / easydistill
A toolkit for knowledge distillation of large language models
☆249Updated 3 weeks ago
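The toolkit's subject, knowledge distillation, can be illustrated with a minimal sketch of the classic soft-label loss: the student is trained to match the teacher's temperature-softened output distribution via a KL divergence scaled by T². This is a generic, self-contained illustration of the technique, not code from easydistill; the function names are hypothetical.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    exps = [math.exp(x / T) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, T=2.0):
    # KL(teacher || student) over temperature-softened distributions,
    # scaled by T^2 so gradients stay comparable across temperatures.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T ** 2) * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Identical logits give zero loss; diverging logits give a positive loss.
print(round(distill_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]), 6))  # 0.0
print(distill_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]) > 0)        # True
```

In practice a distillation framework applies this per token over the vocabulary and usually mixes it with the ordinary cross-entropy loss on ground-truth labels.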
Alternatives and similar repositories for easydistill
Users interested in easydistill are comparing it to the libraries listed below.
Sorting:
- ☆235Updated last year
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (…☆484Updated this week
- LLaMA Factory Document☆161Updated 2 weeks ago
- a-m-team's exploration in large language modeling☆195Updated 7 months ago
- An automated pipeline for evaluating LLMs for role-playing.☆204Updated last year
- ☆115Updated last year
- Repo for Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent☆408Updated 8 months ago
- [ACL 2024 Demo] Official GitHub repo for UltraEval: An open source framework for evaluating foundation models.☆253Updated last year
- ☆179Updated 8 months ago
- A visualization tool to enable deeper understanding and easier debugging of RLHF training.☆279Updated 11 months ago
- Official Repository for SIGIR2024 Demo Paper "An Integrated Data Processing Framework for Pretraining Foundation Models"☆85Updated last year
- Mixture-of-Experts (MoE) Language Model☆194Updated last year
- Ling is a MoE LLM provided and open-sourced by InclusionAI.☆238Updated 8 months ago
- ☆183Updated 2 years ago
- Complete training code for an open-source high-performance Llama model, covering the full pipeline from pre-training to RLHF.☆67Updated 2 years ago
- ☆76Updated 11 months ago
- OpenSeek aims to unite the global open source community to drive collaborative innovation in algorithms, data and systems to develop next…☆240Updated 3 weeks ago
- Code for the piccolo embedding model from SenseTime☆145Updated last year
- ☆50Updated last year
- Scaling Preference Data Curation via Human-AI Synergy☆135Updated 6 months ago
- [ICML 2025] |TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation☆120Updated 8 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models☆138Updated last year
- ☆209Updated 2 months ago
- ☆39Updated 10 months ago
- ☆54Updated last year
- WritingBench: A Comprehensive Benchmark for Generative Writing☆155Updated last month
- A highly capable 2.4B lightweight LLM using only 1T tokens of pre-training data, with all details.☆222Updated 5 months ago
- How to train an LLM tokenizer☆154Updated 2 years ago
- ☆93Updated 8 months ago
- This is a user guide for the MiniCPM and MiniCPM-V series of small language models (SLMs) developed by ModelBest. “面壁小钢炮” focuses on achi…☆297Updated 6 months ago