modelscope / easydistill
A toolkit on knowledge distillation for large language models
☆181 · Updated 2 weeks ago
Alternatives and similar repositories for easydistill
Users interested in easydistill are comparing it to the libraries listed below.
- ☆169 · Updated 5 months ago
- ☆54 · Updated last year
- ☆49 · Updated last year
- Repo for "MaskSearch: A Universal Pre-Training Framework to Enhance Agentic Search Capability" ☆146 · Updated 5 months ago
- a-m-team's exploration in large language modeling ☆189 · Updated 4 months ago
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆67 · Updated 2 years ago
- Scaling Preference Data Curation via Human-AI Synergy ☆116 · Updated 3 months ago
- Deep Research ☆102 · Updated 2 months ago
- Delta-CoMe can achieve near-lossless 1-bit compression and has been accepted at NeurIPS 2024 ☆57 · Updated 11 months ago
- Ling is a MoE LLM provided and open-sourced by InclusionAI. ☆226 · Updated 5 months ago
- ☆234 · Updated last year
- ☆298 · Updated 4 months ago
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆115 · Updated 5 months ago
- ☆89 · Updated 5 months ago
- ☆74 · Updated 9 months ago
- We aim to provide the best references to search, select, and synthesize high-quality and large-quantity data for post-training your LLMs. ☆60 · Updated last year
- ☆40 · Updated last year
- WritingBench: A Comprehensive Benchmark for Generative Writing ☆124 · Updated last month
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆137 · Updated last year
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆369 · Updated last week
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆44 · Updated last year
- A highly capable 2.4B lightweight LLM using only 1T pre-training data, with all details. ☆218 · Updated 3 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆256 · Updated 10 months ago
- OpenSeek aims to unite the global open source community to drive collaborative innovation in algorithms, data and systems to develop next… ☆234 · Updated last month
- Mixture-of-Experts (MoE) Language Model ☆189 · Updated last year
- code for piccolo embedding model from SenseTime ☆142 · Updated last year
- A visualization tool that enables deeper understanding and easier debugging of RLHF training. ☆260 · Updated 8 months ago
- ☆115 · Updated 11 months ago
- [ICML 2025] Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale ☆263 · Updated 3 months ago
- ☆147 · Updated last year