peilongchencc / docker_tutorial
Introduces the use of Docker and Docker Compose.
☆20 · Updated 9 months ago
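Since docker_tutorial covers Docker and Docker Compose usage, here is a minimal sketch of driving a similar workflow from Python with the Docker SDK (docker-py). This is not taken from the repository itself; the image, container name, and port mapping are illustrative assumptions.

```python
# Minimal sketch (assumption: docker-py is installed and a local Docker daemon is running).
# Starts a detached nginx container, roughly what `docker run -d -p 8080:80 nginx:alpine`
# or an equivalent docker compose service definition would do.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a detached container; image, name, and port mapping are illustrative choices.
container = client.containers.run(
    "nginx:alpine",
    name="compose-demo-web",
    detach=True,
    ports={"80/tcp": 8080},
)

print(container.status)            # e.g. "created" or "running"
print(container.logs().decode())   # container stdout/stderr captured so far

# Clean up when done.
container.stop()
container.remove()
```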
Alternatives and similar repositories for docker_tutorial
Users that are interested in docker_tutorial are comparing it to the libraries listed below
- General layout analysis | Chinese document parsing | Document Layout Analysis | layout parser ☆46 · Updated 11 months ago
- A project of general-purpose simple tools ☆18 · Updated 7 months ago
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning ☆108 · Updated last year
- Accelerate vector generation by using an ONNX model ☆17 · Updated last year
- Instruction fine-tuning of the BLOOM model ☆24 · Updated last year
- ☆35 · Updated last month
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆18 · Updated last year
- (1) A rotary positional embedding encoder with elastic-range normalization, plus PEFT LoRA quantized training, improving support for inputs of tens of thousands of tokens. (2) Evidence-theory interpretive learning to strengthen the model's complex logical reasoning. (3) Compatible with the Alpaca data format. ☆44 · Updated last year
- GoGPT: a Chinese-English enhanced large model trained on Llama/Llama 2 | Chinese-Llama2 ☆78 · Updated last year
- Python implementation of an AI-powered research assistant that performs iterative, deep research on any topic by combining search engines, w… ☆45 · Updated 2 months ago
- The newest version of llama3, source code explained line by line in Chinese ☆22 · Updated last year
- 🌈 NERpy: Implementation of Named Entity Recognition using Python. A named entity recognition toolkit supporting models such as BertSoftmax and BertSpan, ready to use out of the box. ☆113 · Updated last year
- Parameter-efficient fine-tuning of ChatGLM-6B based on LoRA and P-Tuning v2 ☆55 · Updated 2 years ago
- Baichuan LLM supervised fine-tuning with LoRA ☆63 · Updated last year
- A repo for updating and debugging Mixtral-7x8B, MoE, ChatGLM3, LLaMa2, Baichuan, Qwen and other LLM models, including new models mixtral, mixtral 8x7b, … ☆46 · Updated this week
- This repository provides an implementation of the paper "A Simple yet Effective Training-free Prompt-free Approach to Chinese Spelling Co… ☆69 · Updated 2 months ago
- A pretrained-model-based sentence embedding generation tool ☆136 · Updated 2 years ago
- Fine-tuning Chinese large language models with QLoRA, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE ☆86 · Updated last year
- A text-to-vector bot built on sentence-transformers ☆46 · Updated 2 years ago
- Code implementation of Dynamic NTK-ALiBi for Baichuan: inference on longer texts without fine-tuning ☆47 · Updated last year
- ☆23 · Updated last year
- Text deduplication ☆72 · Updated last year
- Fine-tuning the Qwen1.5-0.5B-Chat model for general information extraction, aiming to: verify how the generative approach compares with extractive NER; give beginners a simple fine-tuning workflow with as little code as possible; and show how to format data for large-model training. ☆12 · Updated 8 months ago
- Large language model training in 3 stages, plus deployment ☆48 · Updated last year
- A Challenge on Dialog Systems with Retrieval Augmented Generation (FutureDial-RAG), co-located with SLT 2024 ☆11 · Updated 9 months ago
- Scripts related to BGE inference optimization ☆28 · Updated last year
- MOSS chat fine-tuning ☆50 · Updated last year
- Imitate OpenAI with Local Models ☆87 · Updated 9 months ago
- This project enables training and inference with LayoutLMv3 on Chinese images. It mainly addresses three problems: 1. normalizing data into a usable training dataset format; 2. tokenizer modifications for layoutlmv3-base-chinese; 3. splitting texts longer than 512 tokens and applying a sliding window. ☆48 · Updated 8 months ago
- Supports LoRA fine-tuning of ChatGLM2 ☆40 · Updated last year