iMagist486 / Chatbot-Slot-Filling
Chatbot slot filling based on LLM + langchain | multi-turn dialogue slot filling.
☆26 · Updated last year
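The repository's subject, multi-turn slot filling with an LLM, can be sketched roughly as follows: prompt the model to extract predefined slots from each user turn as JSON, then merge the results into the dialogue state until no slots are missing. This is a minimal illustration, not the repository's actual code; the slot names, prompt wording, and the `call_llm` stub (a stand-in for a real model call, e.g. via LangChain) are all hypothetical.

```python
import json

# Hypothetical slot schema for a hotel-booking dialogue (not from the repo).
SLOTS = {"city": None, "date": None, "num_guests": None}

PROMPT_TEMPLATE = (
    "Extract the following slots from the user's message as JSON: "
    "{slot_names}. Use null for any slot not mentioned.\n"
    "User: {utterance}\nJSON:"
)

def call_llm(prompt: str) -> str:
    # Stub: a real system would send `prompt` to a chat model here
    # and return its completion.
    return '{"city": "Beijing", "date": "2024-05-01", "num_guests": null}'

def fill_slots(state: dict, utterance: str) -> dict:
    """Merge slot values extracted from one user turn into the dialogue state."""
    prompt = PROMPT_TEMPLATE.format(slot_names=list(state), utterance=utterance)
    extracted = json.loads(call_llm(prompt))
    for name, value in extracted.items():
        # Keep values filled in earlier turns; only accept non-null updates.
        if name in state and value is not None:
            state[name] = value
    return state

def missing_slots(state: dict) -> list:
    """Slots still unfilled; the bot would ask a follow-up question for these."""
    return [name for name, value in state.items() if value is None]

state = fill_slots(dict(SLOTS), "I want a hotel in Beijing on May 1st")
print(state)                 # city and date filled, num_guests still None
print(missing_slots(state))  # -> ['num_guests']
```

In a multi-turn loop, the bot would re-run `fill_slots` on each user message and prompt for whatever `missing_slots` returns, which is the essence of slot-filling dialogue management.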
Alternatives and similar repositories for Chatbot-Slot-Filling:
Users interested in Chatbot-Slot-Filling are comparing it with the repositories listed below.
- ☆20 · Updated last year
- Fine-tuning Chinese large language models with QLoRA, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE ☆85 · Updated last year
- Shared data: prompt data and pretraining data ☆35 · Updated last year
- A Chinese-native benchmark for evaluating retrieval-augmented generation ☆107 · Updated 9 months ago
- The first Chinese Llama 2 13B model (base + Chinese dialogue SFT, for fluent multi-turn human-machine interaction) ☆89 · Updated last year
- DST (Dialogue State Tracking) for LLMs (Large Language Models) ☆22 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆127 · Updated last month
- Evaluation for AI apps and agents ☆36 · Updated last year
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning ☆105 · Updated last year
- ☆30 · Updated 10 months ago
- A repo for updating and debugging Mixtral-8x7B, MoE, ChatGLM3, LLaMA 2, Baichuan, Qwen and other LLM models, including new models mixtral, mixtral 8x7b, … ☆42 · Updated 3 weeks ago
- ☆58 · Updated 3 months ago
- ☆62 · Updated 4 months ago
- Imitate OpenAI with local models ☆85 · Updated 5 months ago
- A line-by-line walkthrough of Qwen 14B and 7B ☆54 · Updated last year
- zero: zero-training LLM parameter tuning ☆31 · Updated last year
- MOSS chat fine-tuning ☆50 · Updated 9 months ago
- Deploy your own OpenAI API 🤩, based on Flask and transformers (uses the Baichuan2-13B-Chat-4bits model and can run on a single Tesla T4 GPU); implements the OpenAI Chat, Models, and Completions endpoints, including streaming res… ☆88 · Updated last year
- Optimize your prompts like PromptPerfect | universal prompts | large language model prompt optimization ☆37 · Updated last year
- Li Lulu's (李鲁鲁) hands-on practice of the Chinese edition of Andrew Ng's "ChatGPT Prompt Engineering for Developers" course ☆131 · Updated last year
- Llama inference for TencentPretrain ☆97 · Updated last year
- ☆186 · Updated last month
- AGI module library architecture diagram ☆75 · Updated last year
- TianGong-AI-Unstructure ☆56 · Updated this week
- Implements OpenAI APIs and a plugin-enabled ChatGPT with open-source LLMs and other models ☆121 · Updated 7 months ago
- Chinese documentation for DSPy ☆24 · Updated 7 months ago
- ☆105 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆130 · Updated 7 months ago
- ChatGLM-6B fine-tuning / LoRA / PPO / inference; training samples are auto-generated integer/decimal addition, subtraction, multiplication, and division; runs on GPU or CPU ☆164 · Updated last year
- Large language model training (3 stages) + deployment ☆47 · Updated last year