Crossme0809 / frenzyTechAI
☆25 · Updated 8 months ago
Related projects:
- Agentica: Build multi-agent workflows with 10 lines of code. ☆62 · Updated this week
- Fine-tuning Chinese large language models with QLoRA; covers ChatGLM, Chinese-LLaMA-Alpaca, and BELLE ☆86 · Updated last year
- The first Chinese Llama 2 13B model (base model + Chinese dialogue SFT, enabling fluent multi-turn human-machine conversation) ☆89 · Updated last year
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction fine-tuning ☆107 · Updated last year
- ChatGPT WebUI built with Gradio; a simple, easy-to-use web UI for LLM chat and retrieval-augmented question answering (RAG) ☆84 · Updated 3 weeks ago
- Imitate OpenAI with local models ☆83 · Updated 3 weeks ago
- The newest version of Llama 3, with the source code explained line by line in Chinese ☆21 · Updated 5 months ago
- A large language model fine-tuning project, including QLoRA fine-tuning of ChatGLM and LLaMA ☆24 · Updated last year
- ✏️ A zero-cost, hands-on LLM fine-tuning project; ⚡️ train a legal LLM step by step on Colab, based on microsoft/phi-1_5 and chatglm3; covers both LoRA and full-parameter fine-tuning ☆54 · Updated 8 months ago
- Luotuo QA (骆驼QA), a Chinese large language model for reading comprehension ☆71 · Updated last year
- MOSS chat fine-tuning ☆50 · Updated 4 months ago
- A traditional Chinese medicine Q&A chatbot fine-tuned from the ChatGLM3 base model using the LLaMA-Factory framework ☆64 · Updated 8 months ago
- TianGong-AI-Unstructure ☆48 · Updated last week
- A native Chinese benchmark for evaluating retrieval-augmented generation ☆92 · Updated 5 months ago
- Shared data: prompt data and pretraining data ☆35 · Updated 9 months ago
- LoRA fine-tuning support for ChatGLM2 ☆39 · Updated last year
- Retrieval-Augmented Generation (RAG) implemented with libraries such as Tavily, LangChain, and ChatGLM3 ☆23 · Updated 9 months ago
- ☆37 · Updated 5 months ago
- A line-by-line explanation of Qwen 14B and 7B ☆46 · Updated 11 months ago
- Ziya-LLaMA-13B is IDEA's 13-billion-parameter large-scale pretrained model based on LLaMA, capable of translation, programming, text classification, information extraction, summarization, copywriting, commonsense Q&A, and mathematical computation. The Ziya general-purpose large model has completed a three-stage training process: large-scale pretraining, multi-task supervised fine-tuning, and learning from human feedback. This repo is mainly for Ziya-… ☆42 · Updated last year
- open-o1: Using GPT-4o with CoT to create o1-like reasoning chains ☆26 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆120 · Updated 9 months ago
- An archive of NLP project records ☆41 · Updated last month
- A survey of large language model training and serving ☆32 · Updated last year
- Efficient parameter fine-tuning of ChatGLM-6B with LoRA and P-Tuning v2 ☆54 · Updated last year
- An LLM for NL2GQL with NebulaGraph or Neo4j ☆83 · Updated 9 months ago
- Aims to provide an intuitive, concrete, and standardized evaluation of today's mainstream LLMs ☆92 · Updated last year
- Qwen-Efficient-Tuning ☆40 · Updated last year
- A dataset generator for ChatGLM fine-tuning, with multi-turn dialogue support ☆12 · Updated last year
- ☆72 · Updated last year