peilongchencc / docker_tutorial
Introduces the use of Docker and Docker Compose.
☆21 · Updated last year
Alternatives and similar repositories for docker_tutorial
Users that are interested in docker_tutorial are comparing it to the libraries listed below
- A simple general-purpose utilities project ☆22 · Updated last year
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆18 · Updated last year
- Python implementation of an AI-powered research assistant that performs iterative, deep research on any topic by combining search engines, w… ☆49 · Updated 10 months ago
- General layout analysis | Chinese document parsing | Document Layout Analysis | layout parser ☆48 · Updated last year
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction finetuning ☆110 · Updated 2 years ago
- Accelerates vector generation using an ONNX model ☆18 · Updated 2 years ago
- Scripts for BGE inference optimization ☆29 · Updated 2 years ago
- ☆41 · Updated 9 months ago
- A pretraining-based sentence embedding generation tool ☆138 · Updated 2 years ago
- The newest version of llama3, with source code explained line by line in Chinese ☆22 · Updated last year
- SearchGPT: Building a quick conversation-based search engine with LLMs. ☆46 · Updated last year
- Focused on Chinese domain-specific large language models: grounding an LLM in a particular industry or field to become an industry-level or company-level domain model. ☆126 · Updated 11 months ago
- This repository provides an implementation of "A Simple yet Effective Training-free Prompt-free Approach to Chinese Spelling Correction B… ☆86 · Updated 6 months ago
- Integrates advanced large language models such as Qwen and DeepSeek, supporting both a pure LLM + classification head mode and an LLM + LoRA + classification head mode; uses a modular transformers-based design and training pipeline so components can be adjusted or replaced as needed. ☆19 · Updated 5 months ago
- Code for the piccolo embedding model from SenseTime ☆145 · Updated last year
- Parameter-efficient fine-tuning of ChatGLM-6B based on LoRA and P-Tuning v2 ☆55 · Updated 2 years ago
- ☆28 · Updated last year
- Instruction fine-tuning of the BLOOM model ☆24 · Updated 2 years ago
- 🌈 NERpy: Implementation of Named Entity Recognition using Python. A named entity recognition toolkit supporting models such as BertSoftmax and BertSpan, ready to use out of the box. ☆117 · Updated last year
- A repo for updating and debugging Mixtral-7x8B, MoE, ChatGLM3, LLaMa2, BaChuan, Qwen and other LLM models, including the new models mixtral, mixtral 8x7b, … ☆47 · Updated 3 months ago
- A Python implementation of the BM25 text matching algorithm ☆33 · Updated 3 years ago
- (1) A rotary position embedding encoder with elastic-interval normalization plus PEFT LoRA quantized training, improving support for contexts of tens of thousands of tokens. (2) Evidence-theory interpretable learning to strengthen the model's complex logical reasoning. (3) Compatible with the alpaca data format. ☆45 · Updated 2 years ago
- A basic framework for RAG (retrieval-augmented generation) ☆86 · Updated 2 years ago
- An LLM for NL2GQL with NebulaGraph or Neo4j ☆97 · Updated 2 years ago
- Ziya-LLaMA-13B is IDEA's 13-billion-parameter large-scale pretrained model based on LLaMa, capable of translation, programming, text classification, information extraction, summarization, copywriting, commonsense question answering, and mathematical computation. The Ziya general-purpose model has completed a three-stage training process: large-scale pretraining, multi-task supervised fine-tuning, and learning from human feedback. This repository is mainly for Ziya-… ☆46 · Updated 2 years ago
- A Chinese-native benchmark for evaluating retrieval-augmented generation ☆124 · Updated last year
- Fine-tuning Chinese large language models with QLoRA, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE ☆89 · Updated 2 years ago
- Baseline for the Chinese Text Correction (intelligent text proofreading) competition ☆67 · Updated 3 years ago
- Cleaning and quality assessment of Chinese corpora for large-model pre-training | Large model pre-training corpus cleaning ☆75 · Updated last year
- ☆15 · Updated last year