UbiquitousLearning / mllm
Fast Multimodal LLM on Mobile Devices
☆1,370 Updated last week
Alternatives and similar repositories for mllm
Users interested in mllm are comparing it to the libraries listed below.
- Low-bit LLM inference on CPU/NPU with lookup table (see the LUT sketch after this list) ☆916 Updated 8 months ago
- ☆43 Updated 10 months ago
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,330 Updated last year
- Awesome Mobile LLMs ☆301 Updated 2 months ago
- Analyze the inference of Large Language Models (LLMs) along aspects like computation, storage, transmission, and the hardware roofline model… ☆615 Updated last year
- TinyChatEngine: On-Device LLM Inference Library ☆939 Updated last year
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs, and video generative models. ☆672 Updated 2 months ago
- LLM quantization (compression) toolkit with HW acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU, and Intel/AMD/Apple CPU via… ☆1,007 Updated this week
- ☆120 Updated this week
- The Qualcomm® AI Hub apps are a collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) a… ☆369 Updated last week
- Qualcomm® AI Hub Models is our collection of state-of-the-art machine learning models optimized for performance (latency, memory etc.) an… ☆915 Updated last week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Serving… ☆810 Updated 11 months ago
- High-speed and easy-to-use LLM serving framework for local deployment ☆145 Updated 6 months ago
- llm-export exports LLM models to ONNX. ☆343 Updated 3 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference with approximate, dynamic sparse calculation of attention… ☆1,180 Updated 4 months ago
- LLM inference in C/C++ ☆48 Updated this week
- Fast inference from large language models via speculative decoding (sketched after this list) ☆886 Updated last year
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability… ☆3,875 Updated this week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration (see the INT4 sketch after this list) ☆3,431 Updated 6 months ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆1,041 Updated this week
- Demonstration of running a native LLM on an Android device. ☆226 Updated this week
- 📰 Must-read papers and blogs on Speculative Decoding ⚡️ ☆1,117 Updated 2 weeks ago
- A large-scale simulation framework for LLM inference ☆530 Updated 6 months ago
- 🎯 An accuracy-first, highly efficient quantization toolkit for LLMs, designed to minimize quality degradation across Weight-Only Quantization… ☆839 Updated last week
- [TMLR 2024] Efficient Large Language Models: A Survey ☆1,253 Updated 7 months ago
- ☆437 Updated 4 months ago
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases (ICML 2024). ☆1,406 Updated 9 months ago
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25). ☆2,169 Updated last week
- Awesome LLM compression research papers and tools. ☆1,771 Updated 2 months ago
- ☆523 Updated 2 weeks ago
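
The one-line blurbs above compress a lot of machinery, so three of the recurring techniques are sketched below. First, the lookup-table (LUT) approach to low-bit matmul from the first entry: rather than multiplying by low-bit weights at runtime, precompute every possible partial sum of a small activation group once, then reduce the whole matmul to table lookups and accumulation. A minimal sketch, assuming 1-bit {-1, +1} weights and a group size of 4; the function names and data layout are illustrative, not any listed repo's actual kernels.

```python
# LUT-based low-bit matmul sketch. Assumed format (not a real repo's API):
# 1-bit weights in {-1, +1}, lookup groups of G = 4 along the reduction axis.
import numpy as np

G = 4  # weight bits per lookup group

def pack_groups(w_bits):
    """Pack each row's {0,1} bits into G-bit integer table indices."""
    M, K = w_bits.shape
    bit_weights = 1 << np.arange(G)                       # [1, 2, 4, 8]
    return (w_bits.reshape(M, K // G, G) * bit_weights).sum(axis=-1)

def lut_matvec(w_idx, a):
    """y = W @ a, with W rows stored as G-bit group indices."""
    K = a.size
    # For every activation group, precompute the partial sum for all 2^G
    # possible bit patterns: bit 1 contributes +a_i, bit 0 contributes -a_i.
    patterns = np.array([[(p >> i) & 1 for i in range(G)]
                         for p in range(2 ** G)])
    signs = 2 * patterns - 1                              # {0,1} -> {-1,+1}
    lut = a.reshape(K // G, G) @ signs.T                  # (K/G, 2^G)
    # The matmul is now table lookups plus an accumulation per output row.
    return lut[np.arange(K // G), w_idx].sum(axis=1)

rng = np.random.default_rng(0)
M, K = 8, 32
w_bits = rng.integers(0, 2, size=(M, K))                  # 1-bit weights
a = rng.standard_normal(K).astype(np.float32)
y_lut = lut_matvec(pack_groups(w_bits), a)
y_ref = (2 * w_bits - 1) @ a                              # dense reference
assert np.allclose(y_lut, y_ref, atol=1e-4)
```

Real kernels extend this to multi-bit weights by summing scaled bit planes and keep the tables in SIMD registers; the point here is only the lookup structure.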
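
Next, the accept/reject loop at the heart of speculative decoding, which several entries above build on: a cheap draft model proposes k tokens, the target model scores them, and rejected positions are resampled from the residual distribution, so the output is distributed exactly as if the target model had decoded alone. A toy sketch with hand-rolled next-token distributions standing in for both models; none of this is a listed repo's API.

```python
# Speculative decoding accept/reject sketch over a toy vocabulary.
import numpy as np

rng = np.random.default_rng(0)
V = 8  # toy vocabulary size

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def target_dist(ctx):   # stands in for the expensive target model
    return softmax(np.cos(np.arange(V) + len(ctx)))

def draft_dist(ctx):    # stands in for the cheap draft model
    return softmax(np.cos(np.arange(V) + len(ctx)) + 0.3 * np.sin(np.arange(V)))

def speculative_step(ctx, k=4):
    """Draft k tokens cheaply, then verify them against the target model."""
    drafts, c = [], list(ctx)
    for _ in range(k):                      # autoregressive draft proposals
        q = draft_dist(c)
        t = rng.choice(V, p=q)
        drafts.append((t, q))
        c.append(t)
    out = list(ctx)
    for t, q in drafts:
        p = target_dist(out)                # target probs at this position
        if rng.random() < min(1.0, p[t] / q[t]):
            out.append(t)                   # accept the drafted token
        else:                               # reject: resample from residual
            resid = np.maximum(p - q, 0.0)
            out.append(rng.choice(V, p=resid / resid.sum()))
            return out                      # later drafts are now invalid
    out.append(rng.choice(V, p=target_dist(out)))  # bonus token when all pass
    return out

print(speculative_step([1, 2, 3]))
```

The speedup comes from what this toy hides: in a real system the k + 1 target distributions are produced by one batched forward pass rather than k + 1 sequential ones.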
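
Finally, the group-wise weight-only INT4 storage format that the quantization toolkits above produce. The sketch below is plain round-to-nearest with per-group scales; AWQ's actual contribution, the activation-aware search for per-channel scales, is deliberately not reproduced, and the group size, shapes, and function names are assumptions for illustration.

```python
# Group-wise symmetric INT4 weight quantization sketch (round-to-nearest).
import numpy as np

GROUP = 32  # weights sharing one scale (128 is common in practice)

def quantize_int4(w):
    """Returns INT4 codes in [-8, 7] plus one scale per group."""
    out_f, in_f = w.shape
    g = w.reshape(out_f, in_f // GROUP, GROUP)
    scale = np.abs(g).max(axis=-1, keepdims=True) / 7.0   # map group max to 7
    q = np.clip(np.round(g / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_int4(q, scale):
    return (q * scale).reshape(q.shape[0], -1)

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 256)).astype(np.float32)
q, s = quantize_int4(w)
w_hat = dequantize_int4(q, s)
print("max abs reconstruction error:", np.abs(w - w_hat).max())
# Storage cost, pretending scales are kept in fp16:
print("bits per weight ≈", (q.size * 4 + s.size * 16) / w.size)
```

At group size 32 this lands at roughly 4.5 bits per weight (4-bit codes plus one 16-bit scale per 32 weights); smaller groups trade more scale overhead for lower rounding error.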