murray-z / OneStop_QAMaker
A single model that performs both question generation and answer generation
☆29 Updated 2 years ago
Alternatives and similar repositories for OneStop_QAMaker
Users that are interested in OneStop_QAMaker are comparing it to the libraries listed below
- Fine-tuning Chinese large language models with QLoRA, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE ☆89 Updated 2 years ago
- ChatGLM-6B fine-tuning/LoRA/PPO/inference; training samples are auto-generated integer/decimal arithmetic (addition, subtraction, multiplication, division); runs on GPU or CPU ☆164 Updated 2 years ago
- Hands-on information extraction with LLaMA ☆101 Updated 2 years ago
- 🌈 NERpy: Implementation of Named Entity Recognition using Python. An NER toolkit supporting models such as BertSoftmax and BertSpan, ready to use out of the box. ☆116 Updated last year
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning ☆110 Updated 2 years ago
- NLP that everyone can use. Open source takes effort; remember to star. ☆101 Updated 2 years ago
- PyTorch implementation of Baidu's UIE for named entity recognition. ☆56 Updated 2 years ago
- TechGPT: Technology-Oriented Generative Pretrained Transformer ☆227 Updated 2 years ago
- ChatGLM-6B fine-tuning. ☆137 Updated 2 years ago
- A document search tool built on sentence-transformers and ChatGLM ☆157 Updated 2 years ago
- Supports ChatGLM2 LoRA fine-tuning ☆41 Updated 2 years ago
- Fine-tuning ChatGLM ☆128 Updated 2 years ago
- An open-source and powerful Information Extraction toolkit based on GPT (GPT for Information Extraction; GPT4IE for short). Note: we set a… ☆175 Updated 2 years ago
- LLM for NER ☆81 Updated last year
- MOSS chat fine-tuning ☆51 Updated last year
- Integrating the ONgDB database into the langchain ecosystem ☆76 Updated 2 years ago
- A pretraining-based sentence embedding generation tool ☆138 Updated 2 years ago
- A Chinese NER model based on lexical information fusion ☆170 Updated 3 years ago
- Parameter-efficient fine-tuning of ChatGLM-6B based on LoRA and P-Tuning v2 ☆55 Updated 2 years ago
- Runnable solutions for major Chinese text summarization models ☆69 Updated 2 years ago
- Benchmark of KgCLUE with different models and methods ☆28 Updated 3 years ago
- ☆23 Updated 2 years ago
- "桃李" (Taoli): a large language model for international Chinese-language education ☆188 Updated 2 years ago
- FAQ intelligent question-answering system: implements FAQ question-to-template matching and deploys as a lightweight web service ☆65 Updated last year
- SeqGPT: An Out-of-the-box Large Language Model for Open Domain Sequence Understanding ☆226 Updated last year
- PyTorch-based Chinese intent recognition and slot filling ☆202 Updated 3 months ago
- YAYI information extraction model: instruction-tuned on millions of manually constructed, high-quality information extraction samples; developed by the Zhongke Wenge algorithm team. (Repo for YAYI Unified Information Extraction Model) ☆315 Updated last year
- Deep training task ☆30 Updated 2 years ago
- GoGPT: Chinese-English enhanced large models trained on Llama/Llama 2 | Chinese-Llama2 ☆79 Updated 2 years ago
- Ziya-LLaMA-13B is IDEA's 13-billion-parameter large-scale pretrained model based on LLaMA, with capabilities including translation, programming, text classification, information extraction, summarization, copywriting, commonsense QA, and math. The Ziya general-purpose model has completed three training stages: large-scale pretraining, multi-task supervised fine-tuning, and human-feedback learning. This repo is mainly for Ziya-… ☆45 Updated 2 years ago