modelscope / easydistill
A toolkit for knowledge distillation of large language models.
☆200 · Updated 2 weeks ago
Alternatives and similar repositories for easydistill
Users interested in easydistill are comparing it to the libraries listed below.
- ☆235 · Updated last year
- LLaMA Factory Document ☆154 · Updated 2 weeks ago
- code for piccolo embedding model from SenseTime ☆143 · Updated last year
- a-m-team's exploration in large language modeling ☆192 · Updated 5 months ago
- ☆115 · Updated last year
- Dataset synthesis, model training, and evaluation for the mathematical problem-solving ability of large language models, with accompanying write-ups. ☆97 · Updated last year
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆265 · Updated 9 months ago
- Official Repository for SIGIR2024 Demo Paper "An Integrated Data Processing Framework for Pretraining Foundation Models" ☆84 · Updated last year
- OpenSeek aims to unite the global open source community to drive collaborative innovation in algorithms, data and systems to develop next… ☆239 · Updated last week
- Repo for Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent ☆393 · Updated 6 months ago
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆404 · Updated this week
- ☆49 · Updated last year
- Complete training code for an open-source, high-performance Llama model, covering the full process from pre-training to RLHF. ☆67 · Updated 2 years ago
- ☆54 · Updated last year
- ☆301 · Updated 5 months ago
- ☆172 · Updated 6 months ago
- Ling is a MoE LLM provided and open-sourced by InclusionAI. ☆233 · Updated 6 months ago
- An automated pipeline for evaluating LLMs for role-playing. ☆202 · Updated last year
- Scaling Preference Data Curation via Human-AI Synergy ☆128 · Updated 4 months ago
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆118 · Updated 6 months ago
- ☆180 · Updated 2 years ago
- ☆127 · Updated 6 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆256 · Updated 11 months ago
- SuperCLUE-Math6: exploring a new generation of natively Chinese multi-turn, multi-step mathematical reasoning datasets ☆60 · Updated last year
- Mixture-of-Experts (MoE) Language Model ☆192 · Updated last year
- ☆40 · Updated last year
- A highly capable 2.4B lightweight LLM trained on only 1T tokens of pre-training data, with all details released. ☆222 · Updated 3 months ago
- Repo for "MaskSearch: A Universal Pre-Training Framework to Enhance Agentic Search Capability" ☆146 · Updated 5 months ago
- How to train an LLM tokenizer ☆154 · Updated 2 years ago
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024 ☆58 · Updated last year