modelscope / easydistill
A toolkit for knowledge distillation of large language models.
☆57 · Updated this week
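For context on the toolkit's topic: knowledge distillation trains a smaller student model to imitate a larger teacher, classically by matching temperature-softened output distributions. The sketch below shows that generic soft-label KL objective in PyTorch; it is not easydistill's actual API, and the function name and temperature value are illustrative assumptions.

```python
# Minimal sketch of soft-label knowledge distillation (generic PyTorch,
# NOT easydistill's API; function name and temperature are assumptions).
import torch
import torch.nn.functional as F

def soft_label_kd_loss(student_logits: torch.Tensor,
                       teacher_logits: torch.Tensor,
                       temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student
    next-token distributions, averaged over all token positions."""
    vocab = student_logits.size(-1)
    # Soften both distributions with the same temperature.
    student_log_probs = F.log_softmax(student_logits / temperature,
                                      dim=-1).reshape(-1, vocab)
    teacher_probs = F.softmax(teacher_logits / temperature,
                              dim=-1).reshape(-1, vocab)
    # batchmean = summed KL over rows / number of rows (token positions here).
    kl = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    # Rescale by T^2 so gradient magnitudes stay comparable across temperatures.
    return kl * temperature ** 2

# Usage sketch: run the teacher under torch.no_grad() and the student normally
# on the same batch, both producing logits of shape (batch, seq_len, vocab),
# then backpropagate soft_label_kd_loss(student_logits, teacher_logits).
```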
Alternatives and similar repositories for easydistill
Users interested in easydistill are comparing it to the libraries listed below.
- The complete training code for an open-source, high-performance Llama model, covering the full pipeline from pre-training to RLHF. ☆65 · Updated 2 years ago
- ☆29 · Updated 9 months ago
- GRAIN: Gradient-based Intra-attention Pruning on Pre-trained Language Models. ☆19 · Updated last year
- A pure C++ cross-platform LLM acceleration library, callable from Python; supports Baichuan, GLM, LLaMA, and MOSS base models, runs ChatGLM-6B-class models smoothly on mobile, and reaches 10,000+ tokens/s on a single GPU. ☆45 · Updated last year
- Large language model training in three stages, plus deployment. ☆48 · Updated last year
- ☆40 · Updated last year
- A demo built on Megrez-3B-Instruct, integrating a web search tool to enhance the model's question-and-answer capabilities. ☆38 · Updated 5 months ago
- A survey of large language model training and serving. ☆37 · Updated last year
- A collection of model-centric MCP servers. ☆16 · Updated 2 weeks ago
- Imitate OpenAI with local models. ☆87 · Updated 9 months ago
- Code for the Piccolo embedding model from SenseTime. ☆126 · Updated last year
- Shared data: prompt data and pretraining data. ☆36 · Updated last year
- GTS Engine: a powerful, out-of-the-box NLU training system focused on few-shot tasks, able to automatically produce NLP models from only a handful of samples. ☆91 · Updated 2 years ago
- ☆15 · Updated last year
- Text deduplication. ☆72 · Updated last year
- ☆44 · Updated 5 months ago
- A fast LLM training codebase with dynamic strategy selection (DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler). ☆37 · Updated last year
- The newest version of Llama 3, with the source code explained line by line in Chinese. ☆22 · Updated last year
- A code implementation of Dynamic NTK-ALiBi for Baichuan: longer-context inference without fine-tuning (a sketch of the general Dynamic NTK idea appears after this list). ☆47 · Updated last year
- Official repository for the SIGIR 2024 demo paper "An Integrated Data Processing Framework for Pretraining Foundation Models". ☆80 · Updated 9 months ago
- ChatGLM2-6B fine-tuning: SFT/LoRA instruction fine-tuning. ☆108 · Updated last year
- A repo for updating and debugging Mixtral 8x7B, MoE, ChatGLM3, LLaMA 2, Baichuan, Qwen, and other LLMs, including new models such as Mixtral 8x7B, … ☆46 · Updated this week
- MOSS chat fine-tuning. ☆50 · Updated last year
- A line-by-line explanation of Qwen 14B and 7B. ☆60 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation". ☆73 · Updated last year
- Delta-CoMe achieves near-lossless 1-bit compression; accepted at NeurIPS 2024. ☆57 · Updated 6 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆136 · Updated 5 months ago
- ☆87 · Updated last year
- A flexible and efficient training framework for large-scale alignment tasks. ☆372 · Updated this week
- The second round of Chinese large language model evaluation. ☆70 · Updated last year
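As referenced in the Dynamic NTK-ALiBi entry above, NTK-style scaling extends a model's usable context at inference time by rescaling positional encodings instead of fine-tuning. Below is a minimal sketch of the widely used Dynamic NTK rescaling for RoPE, offered only as general background: it is not that repository's ALiBi variant, and the function name, default lengths, and base value are illustrative assumptions.

```python
# Sketch of Dynamic NTK-aware scaling for RoPE (general background only,
# not the Dynamic NTK-ALiBi repo's code; names and defaults are assumptions).
import torch

def ntk_rope_inverse_frequencies(dim: int,
                                 seq_len: int,
                                 max_trained_len: int = 2048,
                                 base: float = 10000.0) -> torch.Tensor:
    """Per-dimension RoPE inverse frequencies, enlarging the base whenever
    the current sequence exceeds the trained context length."""
    if seq_len > max_trained_len:
        # Dynamic NTK: grow the base with the ratio of current to trained
        # length, using the common dim/(dim-2) exponent, so low-frequency
        # dimensions are stretched while high-frequency ones stay intact.
        scale = seq_len / max_trained_len
        base = base * scale ** (dim / (dim - 2))
    # Standard RoPE inverse frequencies over the even dimensions.
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    return inv_freq  # shape: (dim // 2,)
```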