morning-hao / domain-self-instruct
Inspired by self-instruct: beyond general-purpose LLMs, a small domain-specific LLM can be built for customized results, using questions and answers generated by GPT as training data.
☆15 · Updated 2 years ago
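The description above outlines a self-instruct-style pipeline: seed questions are expanded into prompts for a general LLM such as GPT, and the model's replies are parsed into (question, answer) pairs that serve as fine-tuning data for a small domain LLM. A minimal Python sketch of that idea follows; the prompt template and parser here are hypothetical illustrations, not the repository's actual code.

```python
def build_prompt(seed_questions, domain):
    """Assemble a generation prompt from a few seed questions.

    The template is a hypothetical example of the self-instruct pattern:
    show existing questions, ask the model for a new Q&A pair.
    """
    examples = "\n".join(f"- {q}" for q in seed_questions)
    return (
        f"You are an expert in {domain}.\n"
        f"Here are some example questions:\n{examples}\n"
        "Write one new question in the same style, then answer it.\n"
        "Format:\nQ: <question>\nA: <answer>"
    )


def parse_qa(model_output):
    """Extract a (question, answer) training pair from the model's reply."""
    q, _, a = model_output.partition("\nA:")
    return q.removeprefix("Q:").strip(), a.strip()


# Example with a canned model reply; a real pipeline would call the GPT API
# in a loop and accumulate the parsed pairs into a fine-tuning dataset.
reply = "Q: What is a margin call?\nA: A demand to deposit additional funds."
pair = parse_qa(reply)
# pair == ("What is a margin call?", "A demand to deposit additional funds.")
```

In practice such pipelines also deduplicate and filter the generated pairs before fine-tuning, since LLM-generated data can be repetitive or low quality.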
Alternatives and similar repositories for domain-self-instruct
Users interested in domain-self-instruct are comparing it to the repositories listed below.
- Baichuan-13B instruction fine-tuning ☆90 · Updated last year
- Experiments with embedding models, including embedding model evaluation, fine-tuning, and quantization. ☆52 · Updated 10 months ago
- Focused on Chinese-domain large language models: adapting an LLM to a specific industry or field to build a company-level or industry-level domain model. ☆118 · Updated 3 months ago
- ☆141 · Updated last year
- Viscacha: a collection of general-purpose information extraction datasets ☆26 · Updated last year
- A basic framework for RAG (retrieval-augmented generation) ☆84 · Updated last year
- Baichuan LLM supervised fine-tuning with LoRA ☆63 · Updated last year
- ☆110 · Updated 11 months ago
- Text classification with large language models ☆62 · Updated 9 months ago
- LAiW: A Chinese Legal Large Language Models Benchmark ☆80 · Updated 11 months ago
- ☆162 · Updated 2 years ago
- How to train an LLM tokenizer ☆149 · Updated last year
- A simple implementation of text classification using Qwen2ForSequenceClassification. ☆63 · Updated 11 months ago
- Alibaba Tianchi: 2023 Global Intelligent Automotive AI Challenge, Track 1: retrieval-based QA with large AI models, baseline 80+ ☆105 · Updated last year
- [ACL 2024] IEPile: A Large-Scale Information Extraction Corpus ☆194 · Updated 4 months ago
- Adds an RLHF implementation to ChatGLM-6B, with line-by-line explanations of some core code; the examples cover generating short news headlines and an RLHF implementation for context-conditioned recommendation. ☆85 · Updated last year
- Chinese instruction tuning datasets ☆131 · Updated last year
- Alibaba Tongyi Qianwen (Qwen-7B-Chat/Qwen-7B): fine-tuning/LoRA/inference ☆104 · Updated last year
- LLM for NER ☆73 · Updated 10 months ago
- A tool for manual annotation and ranking of response data in the RLHF stage. ☆251 · Updated last year
- ☆63 · Updated 2 years ago
- Clustering and Ranking: Diversity-preserved Instruction Selection through Expert-aligned Quality Estimation ☆79 · Updated 6 months ago
- An instruction tuning tool for large language models (supports FlashAttention) ☆173 · Updated last year
- Open-domain triple extraction using large language models (LLMs). ☆26 · Updated last year
- Fine-tuning for LLaMA, ChatGLM, and other models ☆89 · Updated 10 months ago
- ☆97 · Updated last year
- Baichuan-Chat fine-tuning with LoRA, QLoRA, and other methods, runnable with one click. ☆70 · Updated last year
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning ☆108 · Updated last year
- ☆69 · Updated last year
- A HuggingFace-based tool for training and testing large language models. Supports web UI and terminal inference for each model, low-parameter and full-parameter training (pre-training, SFT, RM, PPO, DPO), as well as model merging and quantization. ☆217 · Updated last year