mindspore-lab / mindrlhf
☆31 · Updated 2 months ago
Alternatives and similar repositories for mindrlhf:
Users interested in mindrlhf are also comparing it to the libraries listed below.
- ☆84 · Updated last year
- How to train an LLM tokenizer ☆142 · Updated last year
- Finetuning LLaMA with RLHF (Reinforcement Learning from Human Feedback) based on DeepSpeed Chat ☆113 · Updated last year
- A flexible and efficient training framework for large-scale alignment tasks ☆322 · Updated 3 weeks ago
- Train an LLM from scratch on a single 24 GB GPU ☆50 · Updated 4 months ago
- NTK-scaled version of the ALiBi position encoding in Transformers ☆66 · Updated last year
- A visualization tool for deeper understanding and easier debugging of RLHF training ☆164 · Updated 3 weeks ago
- SuperCLUE-Agent: a benchmark for evaluating core agent capabilities on native Chinese tasks ☆82 · Updated last year
- Implementation of a Chinese ChatGPT ☆287 · Updated last year
- Focused on Chinese-domain large language models: adapting them to a specific industry or field to build industry-level or company-level domain models ☆116 · Updated last week
- ☆43 · Updated last year
- Instruction-tuning toolkit for large language models (supports FlashAttention) ☆171 · Updated last year
- ☆52 · Updated last year
- ☆156 · Updated this week
- ☆104 · Updated 4 months ago
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning ☆106 · Updated last year
- Chinese instruction-tuning datasets ☆128 · Updated 11 months ago
- Text deduplication ☆69 · Updated 9 months ago
- Simple and efficient multi-GPU fine-tuning of large models with DeepSpeed + Trainer ☆123 · Updated last year
- An LLM training and evaluation toolkit built on HuggingFace: supports web UI and terminal inference for various models, low-parameter and full-parameter training (pre-training, SFT, RM, PPO, DPO), model merging, and quantization ☆213 · Updated last year
- ☆160 · Updated last year
- Inference code for LLaMA models ☆114 · Updated last year
- ☆164 · Updated last year
- ☆141 · Updated 8 months ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP ☆94 · Updated last year
- Ongoing research training transformer language models at scale, including BERT & GPT-2 ☆69 · Updated last year
- Code implementation of Baichuan Dynamic NTK-ALiBi: longer-context inference without fine-tuning ☆47 · Updated last year
- CLongEval: A Chinese Benchmark for Evaluating Long-Context Large Language Models ☆40 · Updated last year
- Multi-dimensional Chinese alignment evaluation benchmark for large models (ACL 2024) ☆366 · Updated 6 months ago
- Line-by-line annotated walkthrough of the Baichuan2 code, beginner-friendly ☆212 · Updated last year