baudzhou / MindsporeTrainer
Make Mindspore Training Easier
☆8 Updated 2 years ago
Alternatives and similar repositories for MindsporeTrainer:
Users interested in MindsporeTrainer are comparing it to the libraries listed below.
- ☆16 Updated last year
- OpenAI-Parallel-Toolkit is a Python library for handling multiple OpenAI API keys and parallel tasks. It provides API key rotation, multi… ☆74 Updated 7 months ago
- A more efficient GLM implementation! ☆55 Updated 2 years ago
- The newest version of Llama 3, with the source code explained line by line in Chinese ☆22 Updated 11 months ago
- ☆49 Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆36 Updated 2 months ago
- A light proxy solution for the HuggingFace hub. ☆46 Updated last year
- A simple MLLM that surpassed QwenVL-Max using open-source data only, built on a 14B LLM. ☆37 Updated 6 months ago
- A demo PDF parser (including OCR and object-detection tools) ☆34 Updated 5 months ago
- The Paddle implementation of Meta's LLaMA. ☆45 Updated last year
- The "WuDao" model ☆122 Updated 3 years ago
- ☆56 Updated last year
- ⚡ Boost the inference speed of GPT models in transformers with ONNX Runtime ☆53 Updated last year
- Evaluation for AI apps and agents ☆36 Updated last year
- Yaya (丫丫) is an attempt at instruction fine-tuning with LoRA, using MOSS as the base model. Completed mainly by Huang Hongsen and Chen Qiyuan @ Central China Normal University; it is also a sub-project of Luotuo (骆驼), the open-source Chinese large language model. ☆30 Updated last year
- Instruction fine-tuning of ChatGLM based on Chinese legal knowledge ☆44 Updated last year
- Kanchil (鼷鹿) is the world's smallest even-toed ungulate; this open-source project explores whether small models (under 6B) can also be aligned with human preferences. ☆113 Updated 2 years ago
- MNBVC text-quality classification using fastText ☆16 Updated last year
- IPython notebooks for running sample experiments and exploring ideas ☆14 Updated last week
- Test data for kimi-chat ☆7 Updated last year
- MultilingualShareGPT, the free multi-language corpus for LLM training ☆73 Updated last year
- ☆10 Updated last year
- The source code and dataset mentioned in the paper Seal-Tools: Self-Instruct Tool Learning Dataset for Agent Tuning and Detailed Benchmar… ☆46 Updated 4 months ago
- An open-source conversational language model developed by the Knowledge Works Research Laboratory at Fudan University. ☆64 Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 Updated last year
- A light local website for displaying the performance of different chat models. ☆85 Updated last year
- A demo of vLLM's remarkable effect on Chinese large language models ☆31 Updated last year
- An NTK-scaled version of ALiBi position encoding in Transformer. ☆67 Updated last year
- ☆40 Updated 5 months ago
- Leveraging large language models for text-to-SQL synthesis, this project fine-tunes WizardLM/WizardCoder-15B-V1.0 with QLoRA on a custom … ☆43 Updated last year