tianchiguaixia / ocr_recognition
Fine-tunes Alibaba's open-source text detection model, using the OCR results returned by the 合合 (Intsig) recognition service as initial training data. The model is then further trained to adapt to a specific corpus of 10,000 images, improving text recognition accuracy.
☆10 · Updated 11 months ago
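The weak-supervision setup described above (using a commercial OCR service's output as initial labels for fine-tuning) can be sketched roughly as follows. This is a minimal sketch, not the repository's actual code: the response schema (`image`/`text`/`confidence` fields), the helper name `build_finetune_manifest`, and the confidence threshold are all assumptions about what a typical OCR API returns.

```python
import json

def build_finetune_manifest(ocr_results, min_confidence=0.9):
    """Turn third-party OCR results into (image, label) training pairs.

    `ocr_results` is assumed to be a list of dicts like
    {"image": "imgs/0001.jpg", "text": "发票号码", "confidence": 0.97} --
    the fields a commercial OCR API typically returns; the exact schema
    of the 合合 (Intsig) response is an assumption here. Empty and
    low-confidence recognitions are dropped so noisy labels do not
    pollute the initial fine-tuning set.
    """
    manifest = []
    for r in ocr_results:
        if r.get("confidence", 0.0) >= min_confidence and r.get("text"):
            manifest.append({"image": r["image"], "label": r["text"]})
    return manifest

results = [
    {"image": "imgs/0001.jpg", "text": "发票号码", "confidence": 0.97},
    {"image": "imgs/0002.jpg", "text": "", "confidence": 0.99},
    {"image": "imgs/0003.jpg", "text": "金额", "confidence": 0.42},
]
print(json.dumps(build_finetune_manifest(results), ensure_ascii=False))
# → [{"image": "imgs/0001.jpg", "label": "发票号码"}]
```

Filtering on confidence is the key design choice when bootstrapping from another OCR engine's output: the fine-tuned model can only be as good as its weak labels, so it is safer to train on fewer, cleaner pairs.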
Alternatives and similar repositories for ocr_recognition
Users interested in ocr_recognition are comparing it to the libraries listed below.
- A survey of large language model training and serving ☆36 · Updated 2 years ago
- SearchGPT: Building a quick conversation-based search engine with LLMs. ☆46 · Updated 10 months ago
- This repository provides an implementation of "A Simple yet Effective Training-free Prompt-free Approach to Chinese Spelling Correction B… ☆83 · Updated 4 months ago
- General document layout analysis | Chinese document parsing | layout parser ☆47 · Updated last year
- Python implementation of AI-powered research assistant that performs iterative, deep research on any topic by combining search engines, w… ☆48 · Updated 7 months ago
- An introduction to using Docker and Docker Compose ☆21 · Updated last year
- A collection of simple general-purpose utilities ☆20 · Updated last year
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆18 · Updated last year
- Parameter-efficient fine-tuning of ChatGLM-6B based on LoRA and P-Tuning v2 ☆55 · Updated 2 years ago
- ☆15 · Updated last year
- The newest version of Llama 3, source code explained line by line in Chinese ☆22 · Updated last year
- ☆27 · Updated last year
- ☆166 · Updated last year
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning ☆110 · Updated 2 years ago
- A code implementation of Dynamic NTK-ALiBi for Baichuan: inference over longer texts without any fine-tuning ☆49 · Updated 2 years ago
- ☆13 · Updated 7 months ago
- Qwen1.5-SFT (Alibaba): fine-tuning Qwen_Qwen1.5-2B-Chat/Qwen_Qwen1.5-7B-Chat with transformers / LoRA (peft) / inference ☆68 · Updated last year
- A Chinese instruction dataset for fine-tuning LLMs ☆28 · Updated 2 years ago
- Accelerate embedding-vector generation using an ONNX model ☆18 · Updated last year
- Focused on Chinese domain-specific large language models: adapting an LLM to a particular industry or field to serve as a company-level or industry-level domain model ☆126 · Updated 8 months ago
- General information extraction (entity/relation/event extraction) based on the Qwen2 model ☆38 · Updated last year
- A repo for updating and debugging Mixtral-8x7B, MoE, ChatGLM3, LLaMA2, Baichuan, Qwen, and other LLM models, including new models mixtral, mixtral 8x7b, … ☆47 · Updated last month
- ☆19 · Updated last year
- PICA: a multi-turn empathetic dialogue model ☆96 · Updated 2 years ago
- Fine-tuning Qwen1.5-0.5B-Chat for general information extraction, aiming to: validate the generative approach against extractive NER; give beginners a simple fine-tuning workflow with minimal code; and show how to format data for LLM training ☆15 · Updated last year
- Baichuan LLM supervised fine-tuning with LoRA ☆64 · Updated 2 years ago
- A native-Chinese benchmark for evaluating retrieval-augmented generation ☆123 · Updated last year
- Chinese pretrained ModernBERT ☆91 · Updated 7 months ago
- Using OCR results to constrain the answers of multimodal LLMs in visual information extraction tasks ☆41 · Updated 10 months ago
- Chinese corpus cleaning and quality assessment for large-model pre-training ☆70 · Updated last year