simonlisiyu / llm_finetune
A one-click, end-to-end web platform covering training-data upload, fine-tuning, model merging, model deployment, GPU monitoring, and more; no Python or shell development required.
☆19 · Updated last year
Alternatives and similar repositories for llm_finetune
Users interested in llm_finetune are comparing it to the repositories listed below.
- Large language model training in 3 stages, plus deployment ☆48 · Updated last year
- A demo built on Megrez-3B-Instruct, integrating a web search tool to enhance the model's question-answering capabilities. ☆38 · Updated 5 months ago
- Walkthrough of a 60-point baseline for the SMP 2023 ChatGLM Financial LLM Challenge ☆185 · Updated last year
- ☆66 · Updated 8 months ago
- DSPy documentation in Chinese ☆27 · Updated last year
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning ☆108 · Updated last year
- A dataset template for guiding chat models to self-cognition, including information about the model's identity, capabilities, usage, limi… ☆28 · Updated last year
- SearchGPT: Building a quick conversation-based search engine with LLMs. ☆46 · Updated 5 months ago
- A native-Chinese retrieval-augmented generation (RAG) evaluation benchmark ☆118 · Updated last year
- Imitate OpenAI with local models ☆87 · Updated 9 months ago
- Quickly build a document Q&A bot based on ChatGLM ☆88 · Updated 2 years ago
- Aims to provide an intuitive, concrete, and standardized evaluation of today's mainstream LLMs ☆94 · Updated last year
- An LLM fine-tuning project, including QLoRA fine-tuning of ChatGLM and LLaMA ☆27 · Updated last year
- A survey of large language model training and serving ☆37 · Updated last year
- Agentica: Effortlessly build intelligent, reflective, and collaborative multimodal AI agents! ☆168 · Updated last week
- Luotuo QA: a Chinese large language model for reading comprehension. ☆74 · Updated 2 years ago
- A guide to learning open-source ChatGPT-style models: collects methods for obtaining training data, fine-tuning models, and serving them, and records common pitfalls encountered in practice; bookmarking and sharing welcome, in the hope of saving you some time ☆75 · Updated last year
- ChatGLM-6B tool applications using LangChain ☆75 · Updated 2 years ago
- bisheng-unstructured library ☆48 · Updated 2 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆136 · Updated 6 months ago
- A document search tool built with Sentence Transformers and ChatGLM ☆154 · Updated 2 years ago
- QLoRA fine-tuning of Chinese large language models, including ChatGLM, Chinese-LLaMA-Alpaca, and BELLE ☆86 · Updated last year
- (1) A rotary position embedding encoder with elastic-interval normalization, plus PEFT LoRA quantized training, improving support for contexts of tens of thousands of tokens. (2) Evidence-theory interpretable learning to strengthen the model's complex logical reasoning. (3) Compatible with the Alpaca data format.