zhihaiLLM / wisdomInterrogatory
☆523 · Updated last year
Alternatives and similar repositories for wisdomInterrogatory
Users that are interested in wisdomInterrogatory are comparing it to the libraries listed below
- Fuzi·Mingcha (夫子•明察), a Chinese judicial LLM jointly developed by Shandong University, Inspur Cloud, and China University of Political Science and Law. Built on the ChatGLM base model and trained on massive unsupervised Chinese judicial corpora plus supervised judicial fine-tuning data, it supports statute retrieval, case analysis, syllogistic judgment reasoning, and judicial dialogue, aiming to provide users with comprehensive, high-accuracy legal consultation and answers… ☆352 · Updated 3 weeks ago
- Chinese legal LLaMA (LLaMA for the Chinese legal domain) ☆958 · Updated 11 months ago
- A Chinese medical ChatGPT based on LLaMA, trained on a large-scale pre-training corpus and a multi-turn dialogue dataset. ☆373 · Updated last year
- Huozi (活字) general-purpose LLM ☆393 · Updated 11 months ago
- [Chinese legal LLM] DISC-LawLLM: an intelligent legal system powered by large language models (LLMs) to provide a wide range of legal services. ☆769 · Updated 2 months ago
- This project collects open-source datasets for table-intelligence tasks (e.g., table question answering, table-to-text generation), converts the raw data into instruction-tuning format, and fine-tunes LLMs on it to strengthen their understanding of tabular data, ultimately building a large language model dedicated to table-intelligence tasks. ☆610 · Updated last year
- DeepSpeed, LLM, Medical_Dialogue, medical LLM, pre-training, fine-tuning ☆277 · Updated last year
- A curated collection of open-source SFT datasets, updated continuously ☆538 · Updated 2 years ago
- LexiLaw - a Chinese legal LLM ☆916 · Updated 5 months ago
- HanFei-1.0 (韩非), the first legal LLM in China trained with full-parameter training ☆123 · Updated last year
- YAYI information extraction LLM: instruction-tuned on millions of manually constructed, high-quality information extraction samples, developed by the Wenge (中科闻歌) algorithm team. (Repo for YAYI Unified Information Extraction Model) ☆308 · Updated last year
- 🛰️ Fine-tuning ChatGLM with LoRA, P-Tuning V2, Freeze, RLHF, and other methods on real medical dialogue data; our scope goes beyond medical Q&A ☆328 · Updated last year
- PromptCBLUE: a large-scale instruction-tuning dataset for multi-task and few-shot learning in the Chinese medical domain ☆375 · Updated last year
- An open-source educational chat model from ICALK, East China Normal University. An open-source Chinese-English educational dialogue LLM (general-purpose base model, GPU deployment, data cleaning). Tribute to: LLaMA, MOSS, BELLE, Z… ☆836 · Updated last month
- Research on value evaluation and alignment for Chinese LLMs ☆534 · Updated 2 years ago
- Firefly Chinese LLaMA-2 LLM, supporting continued pre-training of Baichuan2, Llama2, Llama, Falcon, Qwen, Baichuan, InternLM, Bloom, and other models ☆413 · Updated last year
- unified embedding model ☆867 · Updated last year
- Chinese LLM fine-tuning (LLM-SFT); math instruction dataset MWP-Instruct; supported models (ChatGLM-6B, LLaMA, Bloom-7B, baichuan-7B); supports (LoRA, QLoRA, DeepSpeed, UI, TensorboardX); supports (… ☆208 · Updated last year
- Analysis of the Chinese cognitive abilities of language models ☆237 · Updated last year
- Easy and efficient fine-tuning of LLMs (supports LLaMA, LLaMA2, LLaMA3, Qwen, Baichuan, GLM, Falcon); efficient quantized training and deployment of large models. ☆609 · Updated 7 months ago
- Full-parameter fine-tuning of ChatGLM2-6B, with efficient fine-tuning supporting multi-turn dialogue. ☆400 · Updated 2 years ago
- 📝 An awesome collection of Chinese legal datasets and relevant resources. Dedicated to collecting comprehensive Chinese legal data sources ☆910 · Updated 2 years ago
- ☆351 · Updated last year
- ChatGLM-6B instruction learning | instruction data | Instruct ☆655 · Updated 2 years ago
- Uses the peft library for efficient 4-bit QLoRA fine-tuning of chatGLM-6B/chatGLM2-6B, then merges the LoRA model into the base model and quantizes the result to 4 bits (see the sketch after this list). ☆359 · Updated 2 years ago
- A Chinese medical consultation model based on ChatGLM-6B ☆821 · Updated last year
- Tuning LLMs with no tears💦; Sample Design Engineering (SDE) for more efficient downstream tuning. ☆1,012 · Updated last year
- Multi-GPU ChatGLM with DeepSpeed and… ☆409 · Updated last year
- "桃李“: 国际中文教育大模型☆183Updated last year
- Cornucopia (聚宝盆): a series of open-source, commercially usable Chinese financial LLMs, together with an efficient, lightweight vertical-domain LLM training framework (pretraining, SFT, RLHF, quantization, etc.) ☆641 · Updated 2 years ago
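
A minimal sketch (not taken from any of the repos above) of the QLoRA-then-merge workflow described in the peft/ChatGLM entry: load the base model in 4-bit, attach LoRA adapters, fine-tune, then reload the base at full precision and fold the adapter in. The model id, hyperparameters, and paths are illustrative assumptions.

```python
import torch
from transformers import AutoModel, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, PeftModel, get_peft_model

BASE = "THUDM/chatglm2-6b"  # assumed Hugging Face model id

# 1) Load the base model in 4-bit NF4 precision (the QLoRA setting).
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(BASE, trust_remote_code=True)
model = AutoModel.from_pretrained(BASE, quantization_config=bnb, trust_remote_code=True)

# 2) Attach LoRA adapters; "query_key_value" is ChatGLM's fused attention projection
#    (target module names vary by architecture).
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["query_key_value"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()
# ... run an SFT training loop or transformers.Trainer on the quantized + LoRA model ...
model.save_pretrained("chatglm2-qlora-adapter")

# 3) Merge: reload the base model at full precision, fold the adapter weights in,
#    and save a standalone checkpoint that can be re-quantized to 4 bits for deployment.
full = AutoModel.from_pretrained(BASE, torch_dtype=torch.float16, trust_remote_code=True)
merged = PeftModel.from_pretrained(full, "chatglm2-qlora-adapter").merge_and_unload()
merged.save_pretrained("chatglm2-merged")
```

The merge step reloads the base weights in fp16 because adapter weights cannot be folded directly into a 4-bit quantized model; quantizing the merged checkpoint is a separate, final step.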