modelscope / easydistill
A toolkit on knowledge distillation for large language models.
☆266 · Updated this week
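The core idea behind toolkits like easydistill is knowledge distillation: a smaller student model is trained to match a larger teacher's softened output distribution. The sketch below is a hypothetical, self-contained illustration of the classic soft-target loss (Hinton et al.); it is not easydistill's actual API.

```python
# Hypothetical sketch of the soft-target distillation loss; names and
# structure are illustrative, not taken from easydistill.
import math

def softmax(logits, temperature=1.0):
    """Softmax over a list of logits, softened by a temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in the classic soft-target formulation."""
    p = softmax(teacher_logits, temperature)  # teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

# Identical logits give zero loss; mismatched logits give a positive loss.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))
print(distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0]))
```

In practice this loss is usually combined with the ordinary cross-entropy on ground-truth labels, with the temperature controlling how much of the teacher's "dark knowledge" about non-argmax tokens is transferred.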
Alternatives and similar repositories for easydistill
Users interested in easydistill are comparing it to the libraries listed below.
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (…) ☆510 · Updated last week
- a-m-team's exploration in large language modeling ☆195 · Updated 8 months ago
- LLaMA Factory Document ☆164 · Updated last week
- ☆234 · Updated last year
- ☆180 · Updated 9 months ago
- Ling is a MoE LLM provided and open-sourced by InclusionAI. ☆238 · Updated 8 months ago
- ☆54 · Updated last year
- A visualization tool for deeper understanding and easier debugging of RLHF training. ☆283 · Updated 11 months ago
- An automated pipeline for evaluating LLMs for role-playing. ☆204 · Updated last year
- Official repository for the SIGIR 2024 demo paper "An Integrated Data Processing Framework for Pretraining Foundation Models". ☆85 · Updated last year
- [ACL 2024 Demo] Official GitHub repo for UltraEval: an open-source framework for evaluating foundation models. ☆255 · Updated last year
- Repo for benchmarking multimodal retrieval-augmented generation with a dynamic VQA dataset and a self-adaptive planning agent. ☆412 · Updated 9 months ago
- This project covers dataset synthesis, model training, and evaluation for LLM mathematical problem-solving, with accompanying write-ups. ☆100 · Updated last year
- ☆115 · Updated last year
- Mixture-of-Experts (MoE) Language Model ☆195 · Updated last year
- ☆209 · Updated 3 months ago
- OpenSeek aims to unite the global open source community to drive collaborative innovation in algorithms, data and systems to develop next… ☆240 · Updated 2 weeks ago
- ☆184 · Updated 2 years ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆139 · Updated last year
- A highly capable 2.4B lightweight LLM using only 1T pre-training data, with all details. ☆223 · Updated 6 months ago
- Scaling Preference Data Curation via Human-AI Synergy ☆139 · Updated 7 months ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆260 · Updated last year
- How to train an LLM tokenizer ☆153 · Updated 2 years ago
- ☆76 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF. ☆67 · Updated 2 years ago
- ☆51 · Updated last year
- The RedStone repository includes code for preparing extensive datasets used in training large language models. ☆146 · Updated 2 weeks ago
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024. ☆58 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆48 · Updated last year
- Code for the piccolo embedding model from SenseTime ☆145 · Updated last year