peilongchencc / docker_tutorial
Introduces the use of Docker and Docker Compose.
☆ 21 · Updated last year
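The repo covers Docker and Docker Compose usage. As a hedged illustration only (the service name, image, and ports below are assumptions, not taken from the repo), a minimal `docker-compose.yml` might look like:

```yaml
# Minimal sketch of a Compose file: one service serving files over HTTP.
# Image tag and port mapping are illustrative assumptions.
services:
  app:
    image: python:3.11-slim
    command: python -m http.server 8000
    ports:
      - "8000:8000"   # host:container
```

Running `docker compose up` with this file would start the service and map container port 8000 to the host.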
Alternatives and similar repositories for docker_tutorial
Users interested in docker_tutorial are comparing it to the repositories listed below.
- A general-purpose collection of simple utilities ☆ 22 · Updated last year
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆ 18 · Updated last year
- Python implementation of an AI-powered research assistant that performs iterative, deep research on any topic by combining search engines, w… ☆ 49 · Updated 9 months ago
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning ☆ 110 · Updated 2 years ago
- Accelerates embedding-vector generation using an ONNX model ☆ 18 · Updated last year
- General layout analysis | Chinese document parsing | Document Layout Analysis | layout parser ☆ 48 · Updated last year
- ☆ 40 · Updated 9 months ago
- Focused on Chinese-domain large language models, applied to a specific industry or field to become an industry-level or company-level domain LLM ☆ 126 · Updated 10 months ago
- This repository provides an implementation of "A Simple yet Effective Training-free Prompt-free Approach to Chinese Spelling Correction B… ☆ 85 · Updated 6 months ago
- A survey of large language model training and serving ☆ 37 · Updated 2 years ago
- Fine-tunes the Qwen1.5-0.5B-Chat model for general information extraction, aiming to: verify how generative methods compare with extractive NER; give newcomers a simple fine-tuning workflow with minimal code; and show how to format data for LLM training ☆ 15 · Updated last year
- The newest version of llama3, source code explained line by line in Chinese ☆ 22 · Updated last year
- ☆ 28 · Updated last year
- Fine-tunes Chinese large language models with QLoRA, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE ☆ 89 · Updated 2 years ago
- Integrates advanced LLMs such as Qwen and DeepSeek, supporting both a pure LLM + classification-head mode and an LLM + LoRA + classification-head mode; uses a modular transformers-based design so components can be adjusted or swapped as needed for training ☆ 17 · Updated 4 months ago
- Cleaning and quality assessment of Chinese corpora for large-model pre-training ☆ 74 · Updated last year
- A basic framework for RAG (retrieval-augmented generation) ☆ 86 · Updated 2 years ago
- 🌈 NERpy: Implementation of Named Entity Recognition using Python. A named entity recognition toolkit supporting models such as BertSoftmax and BertSpan, ready to use out of the box ☆ 116 · Updated last year
- SearchGPT: Building a quick conversation-based search engine with LLMs ☆ 46 · Updated last year
- Fine-tunes Alibaba's open-source text detection model, using OCR results returned by 合合 recognition as initial training data, optimizing the model to better fit a specific scenario of 10,000 images and improve text recognition accuracy ☆ 10 · Updated last year
- A repo for updating and debugging Mixtral-8x7B, MoE, ChatGLM3, LLaMA2, Baichuan, Qwen, and other LLM models, including new models mixtral, mixtral 8x7b, … ☆ 47 · Updated 3 months ago
- Instruction fine-tuning of the BLOOM model ☆ 24 · Updated 2 years ago
- Ziya-LLaMA-13B is IDEA's 13-billion-parameter large-scale pre-trained model based on LLaMA, capable of translation, programming, text classification, information extraction, summarization, copywriting, commonsense QA, and mathematical computation. The Ziya general-purpose model has completed a three-stage training process: large-scale pre-training, multi-task supervised fine-tuning, and learning from human feedback. This repo mainly covers Ziya-… ☆ 45 · Updated 2 years ago
- A pre-training-based sentence embedding generation tool ☆ 138 · Updated 2 years ago
- TianGong-AI-Unstructure ☆ 69 · Updated 3 months ago
- A Chinese-native retrieval-augmented generation evaluation benchmark ☆ 122 · Updated last year
- Parameter-efficient fine-tuning of ChatGLM-6B with LoRA and P-Tuning v2 ☆ 55 · Updated 2 years ago
- A PyTorch training and inference framework for Baidu's UIE extraction model ☆ 12 · Updated last year
- Chinese pre-trained ModernBERT ☆ 96 · Updated 9 months ago
- LLaMA inference for TencentPretrain ☆ 99 · Updated 2 years ago