mbzuai-oryx / MobiLlama
MobiLlama: Small Language Model tailored for edge devices
☆615 · Updated 10 months ago
Alternatives and similar repositories for MobiLlama:
Users interested in MobiLlama are comparing it to the libraries listed below.
- Strong and Open Vision Language Assistant for Mobile Devices ☆1,118 · Updated 9 months ago
- 🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3) ☆824 · Updated 6 months ago
- Reaching LLaMA2 Performance with 0.1M Dollars ☆967 · Updated 6 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆721 · Updated 11 months ago
- Official implementation of paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆858 · Updated last month
- A toolkit for inference and evaluation of 'mixtral-8x7b-32kseqlen' from Mistral AI ☆766 · Updated last year
- MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases (ICML 2024) ☆1,228 · Updated 2 months ago
- TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones ☆1,259 · Updated 9 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,427 · Updated 10 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,183 · Updated 3 months ago
- A family of lightweight multimodal models ☆979 · Updated 2 months ago
- [ACL 2024] Progressive LLaMA with Block Expansion ☆496 · Updated 8 months ago
- HPT - Open Multimodal LLMs from HyperGAI ☆313 · Updated 7 months ago
- [CVPR 2024] OneLLM: One Framework to Align All Modalities with Language ☆614 · Updated 3 months ago
- Codebase for Merging Language Models (ICML 2024) ☆793 · Updated 8 months ago
- Mixture-of-Experts for Large Vision-Language Models ☆2,058 · Updated last month
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,187 · Updated last year
- For releasing code related to compression methods for transformers, accompanying our publications ☆405 · Updated last week
- AI for all: Build the large graph of the language models ☆252 · Updated 7 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware ☆694 · Updated 4 months ago
- A novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings ☆599 · Updated 2 months ago
- [ICLR 2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs ☆757 · Updated 3 months ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,393 · Updated 7 months ago
- Official code for Goldfish model for long video understanding and MiniGPT4-video for short video understanding ☆585 · Updated last month
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆637 · Updated 7 months ago
- [NeurIPS'24 Spotlight, ICLR'25] To speed up Long-context LLMs' inference, approximate and dynamic sparse calculate the attention, which r… ☆890 · Updated last week
- From scratch implementation of a sparse mixture of experts language model inspired by Andrej Karpathy's makemore :) ☆618 · Updated 2 months ago
- [TLLM'23] PandaGPT: One Model To Instruction-Follow Them All ☆780 · Updated last year
- OLMoE: Open Mixture-of-Experts Language Models ☆536 · Updated last month