TylunasLi / fastllm
A pure C++ cross-platform LLM acceleration library, callable from Python, supporting chatglm-6B, llama, baichuan, and moss base models on x86 / ARM
☆9 · Updated this week
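Since fastllm exposes Python bindings on top of its C++ core, a minimal usage sketch may help for orientation. It assumes the `fastllm_pytools` module and the `llm.model` / `response` / `stream_response` calls described in the upstream fastllm README, plus a model already converted to the project's `.flm` format; verify the exact names against the version you install.

```python
# Minimal sketch of driving fastllm from Python via its fastllm_pytools bindings.
# Assumes a model already converted to .flm with the project's export tools;
# the exact API may differ between fastllm versions.
from fastllm_pytools import llm

model = llm.model("chatglm-6b-int4.flm")  # hypothetical path to a converted model

# Blocking call: returns the full reply as a string.
print(model.response("你好"))

# Streaming call: yields the reply incrementally.
for piece in model.stream_response("介绍一下你自己"):
    print(piece, end="", flush=True)
```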
Alternatives and similar repositories for fastllm:
Users interested in fastllm are comparing it to the libraries listed below
- An introduction to using Docker and Docker Compose. ☆20 · Updated 5 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆128 · Updated 2 months ago
- ☆37 · Updated 9 months ago
- ☆15 · Updated 7 months ago
- Accelerate embedding vector generation using an ONNX model. ☆14 · Updated last year
- ☆59 · Updated 3 months ago
- ChatGPT WebUI using Gradio. A simple, easy-to-use web UI for LLM chat and retrieval-augmented (RAG) knowledge QA. ☆110 · Updated 5 months ago
- Scripts for optimizing BGE inference. ☆27 · Updated last year
- Imitate OpenAI with Local Models ☆85 · Updated 5 months ago
- Technical articles focused on dialogue systems, centered on the column "Dify Application Usage and Source Code Analysis". ☆75 · Updated 7 months ago
- aigc_serving: lightweight and efficient language model serving and inference. ☆24 · Updated 8 months ago
- A LangChain-based plugin for quickly integrating GLM-4 AllTools functionality. ☆46 · Updated 6 months ago
- gpt_server is an open-source framework for production-grade deployment of LLMs or embedding models. ☆151 · Updated this week
- Uses LangChain for task planning and builds conversational scene resources for each subtask; an MCTS task executor lets each subtask draw on in-context resources and self-reflective exploration to find its best answer to the problem. This approach relies on the model's alignment preferences, and an engineering framework is designed for each preference to implement a sampling strategy over self-assigned rewards for different answers. ☆25 · Updated this week
- ChatGLM2-6B fine-tuning: SFT/LoRA, instruction finetune. ☆105 · Updated last year
- LLM-related materials, including code and docs. ☆12 · Updated this week
- Repository for the Zhipu AI 2024 Financial Industry LLM Challenge. ☆39 · Updated last week
- Intelligent voice intercom based on LLM-generated content. ☆10 · Updated 3 months ago
- Deploy your own OpenAI API 🤩, based on Flask and transformers (using the Baichuan2-13B-Chat-4bits model, runnable on a single Tesla T4 GPU). Implements OpenAI's Chat, Models, and Completions endpoints, including streaming respon… ☆89 · Updated last year
- ☆120 · Updated 8 months ago
- Come raise your own AI cat with its own dedicated memory! ☆10 · Updated 3 months ago
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆18 · Updated 10 months ago
- qwen-7b and qwen-14b finetuning ☆90 · Updated 9 months ago
- A collection of general-purpose simple tools. ☆15 · Updated 4 months ago
- PyTorch implementation of JointBERT: "BERT for Joint Intent Classification and Slot Filling" ☆31 · Updated last year
- An LLM RAG application with API access and voice interaction. ☆10 · Updated 7 months ago
- llama inference for tencentpretrain ☆97 · Updated last year
- Supervised fine-tuning (SFT) of the chatglm3-base model. ☆74 · Updated last year
- (1) A rotary positional embedding encoder with elastic-interval normalization plus PEFT LoRA quantized training, improving support for inputs of tens of thousands of tokens. (2) Evidence-theory explanation learning to strengthen the model's complex logical reasoning. (3) Compatible with the alpaca data format. ☆45 · Updated last year
- Fine-tuning Chinese LLMs with QLoRA, covering ChatGLM, Chinese-LLaMA-Alpaca, and BELLE. ☆85 · Updated last year