OpenBMB / MobileCPM
A Toolkit for Running On-device Large Language Models (LLMs) in Apps
☆61Updated 7 months ago
Alternatives and similar repositories for MobileCPM:
Users that are interested in MobileCPM are comparing it to the libraries listed below
- GLM Series Edge Models☆129Updated last week
- A demo built on Megrez-3B-Instruct, integrating a web search tool to enhance the model's question-and-answer capabilities.☆37Updated 2 months ago
- The first fully commercially usable character (role-play) large language model.☆39Updated 6 months ago
- An open-source LLM based on a Mixture-of-Experts (MoE) structure.☆58Updated 8 months ago
- SUS-Chat: Instruction tuning done right☆48Updated last year
- Mixture-of-Experts (MoE) Language Model☆184Updated 5 months ago
- DRT-o1: Optimized Deep Reasoning Translation via Long Chain-of-Thought☆208Updated 2 months ago
- XVERSE-MoE-A4.2B: A multilingual large language model developed by XVERSE Technology Inc.☆36Updated 9 months ago
- ☆36Updated 4 months ago
- ☆78Updated 9 months ago
- Using Llama-3.1 70b on Groq to create o1-like reasoning chains☆19Updated 5 months ago
- ☆171Updated 3 weeks ago
- zero: training-free LLM parameter tuning☆31Updated last year
- To try TextIn document parsing, visit https://cc.co/16YSIy☆22Updated 7 months ago
- ☆213Updated last week
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data.☆216Updated this week
- Imitate OpenAI with Local Models☆87Updated 6 months ago
- Delta-CoMe achieves near-lossless 1-bit compression and has been accepted at NeurIPS 2024☆53Updated 3 months ago
- TianMu: A modern AI tool with multi-platform support, Markdown support, multimodal input, continuous conversation, and customizable commands.☆83Updated last year
- ☆225Updated 9 months ago
- MiniCPM on iOS.☆66Updated 8 months ago
- XVERSE-65B: A multilingual large language model developed by XVERSE Technology Inc.☆138Updated 10 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models☆128Updated 8 months ago
- ☆28Updated 6 months ago
- The official code for "Aurora: Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning"☆260Updated 9 months ago
- ☆132Updated 9 months ago
- Repo for Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent☆232Updated 3 weeks ago
- As the name suggests: a hand-rolled RAG built from scratch☆120Updated last year
- Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory☆28Updated 9 months ago
- ☆105Updated last year