OpenBMB / MobileCPM
A Toolkit for Running On-device Large Language Models (LLMs) in Apps
☆57Updated 4 months ago
Related projects
Alternatives and complementary repositories for MobileCPM
- This is a user guide for the MiniCPM and MiniCPM-V series of small language models (SLMs) developed by ModelBest. “面壁小钢炮” focuses on achi…☆118Updated 3 weeks ago
- SUS-Chat: Instruction tuning done right☆47Updated 10 months ago
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc.☆132Updated 7 months ago
- ☆213Updated 6 months ago
- An open-source LLM based on an MoE (Mixture-of-Experts) structure.☆57Updated 4 months ago
- Mixture-of-Experts (MoE) Language Model☆180Updated 2 months ago
- zero: training-free LLM parameter tuning☆30Updated last year
- Imitate OpenAI with Local Models☆85Updated 2 months ago
- ☆123Updated last month
- ☆73Updated 11 months ago
- A table-recognition algorithm derived from PP-Structure; the model is converted to ONNX and inference runs on ONNXRuntime, making deployment simple with no memory-leak issues.☆70Updated last week
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc.☆36Updated 6 months ago
- The first fully commercially usable role-play large language model.☆36Updated 3 months ago
- ☆33Updated last month
- ☆77Updated 6 months ago
- The official codes for "Aurora: Activating chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning"☆257Updated 6 months ago
- TianMu: A modern AI tool with multi-platform support, markdown support, multimodality, continuous conversation, and customizable commands.☆84Updated last year
- The official implementation of paper "ToolGen: Unified Tool Retrieval and Calling via Generation"☆99Updated 3 weeks ago
- ☆92Updated 6 months ago
- As the name suggests: a hand-rolled RAG implementation.☆111Updated 8 months ago
- ☆78Updated last month
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models☆126Updated 5 months ago
- The newest version of Llama 3, with the source code explained line by line in Chinese.☆22Updated 7 months ago
- A light proxy solution for HuggingFace hub.☆44Updated last year
- Layout analysis for Chinese and English documents.☆126Updated last month
- A simple MLLM built on a 14B LLM that surpasses QwenVL-Max using only open-source data.☆36Updated 2 months ago
- Using Llama 3.1 70B on Groq to create o1-like reasoning chains.☆19Updated last month
- A native Chinese benchmark for evaluating retrieval-augmented generation.☆100Updated 7 months ago
- The complete training code of the open-source high-performance Llama model, including the full process from pre-training to RLHF.☆70Updated last year
- Use free large-model APIs with your private-domain data to generate SFT training data (entirely free of charge); supports the training-data formats of tools such as LLaMA-Factory.☆84Updated last week