fengwang / LLaMA-Factory-docker
☆25 · Updated last year
Alternatives and similar repositories for LLaMA-Factory-docker
Users interested in LLaMA-Factory-docker are comparing it to the repositories listed below.
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆131 · Updated last year
- ggml implementation of the Baichuan-13B model (adapted from llama.cpp) ☆55 · Updated 2 years ago
- ☆106 · Updated 2 years ago
- The newest version of Llama 3, with source code explained line by line in Chinese ☆22 · Updated last year
- Qwen model fine-tuning ☆105 · Updated 8 months ago
- Code implementation of Dynamic NTK-ALiBi for Baichuan: inference over longer contexts without fine-tuning ☆49 · Updated 2 years ago
- A survey of large language model training and serving ☆36 · Updated 2 years ago
- Fast LLM training codebase with dynamic strategy selection (DeepSpeed + Megatron + FlashAttention + CUDA fusion kernels + compiler) ☆41 · Updated last year
- A Chinese-native retrieval-augmented generation (RAG) evaluation benchmark ☆123 · Updated last year
- A light proxy solution for the Hugging Face hub ☆46 · Updated 2 years ago
- Open efforts to implement ChatGPT-like models and beyond ☆107 · Updated last year
- Imitate OpenAI with local models ☆89 · Updated last year
- A framework for training large language models; supports LoRA, full-parameter fine-tuning, etc.; define a YAML file to start training/fine-tuning of y… ☆30 · Updated last year
- A personal reimplementation of Google's Infini-Transformer using a small 2B model. The project includes both model and train… ☆58 · Updated last year
- Official transformers source-code walkthrough. In the era of large AI models, PyTorch and transformers are the new operating system; everything else is software running on top of them. ☆17 · Updated 2 years ago
- A pure C++ cross-platform LLM acceleration library, callable from Python; supports Baichuan, GLM, LLaMA, and MOSS base models; runs ChatGLM-6B-class models smoothly on mobile devices, reaching 10,000+ tokens/s on a single GPU ☆45 · Updated 2 years ago
- Shared data: prompt data and pretraining data ☆36 · Updated last year
- HammerLLM🔨: a 1.4B sLLM for Chinese and English ☆43 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF ☆68 · Updated 2 years ago
- unify-easy-llm (ULM) aims to be a simple one-click large-model training tool that supports hardware such as NVIDIA GPUs and Ascend NPUs as well as common large models ☆58 · Updated last year
- A Chinese-native industrial evaluation benchmark ☆15 · Updated last year
- SUS-Chat: Instruction tuning done right ☆49 · Updated last year
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF ☆67 · Updated 2 years ago
- Qwen1.5-SFT (Alibaba): fine-tuning (transformers) / LoRA (peft) / inference for Qwen_Qwen1.5-2B-Chat and Qwen_Qwen1.5-7B-Chat ☆68 · Updated last year
- This project mainly explores what can be achieved by fine-tuning an LLM of about 6B parameters (ChatGLM-6B) in a vertical domain (Romance… ☆26 · Updated 2 years ago
- An instruction-tuning tool for large language models (supports FlashAttention) ☆178 · Updated last year
- The official code for "Aurora: Activating chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning" ☆264 · Updated last year
- A Baidu QA dataset with 1 million entries ☆47 · Updated last year
- How to train an LLM tokenizer ☆154 · Updated 2 years ago
- A repo for updating and debugging Mixtral-8x7B, MoE, ChatGLM3, LLaMA 2, Baichuan, Qwen, and other LLM models, including new models mixtral, mixtral 8x7b, … ☆47 · Updated last month