facebookresearch / MobileLLM
MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases. In ICML 2024.
☆1,305 · Updated 2 months ago
Alternatives and similar repositories for MobileLLM
Users interested in MobileLLM are comparing it to the libraries listed below.
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆883 · Updated 2 months ago
- Everything about the SmolLM2 and SmolVLM family of models ☆2,606 · Updated this week
- DataComp for Language Models ☆1,318 · Updated 3 months ago
- nanoGPT style version of Llama 3.1 ☆1,389 · Updated 10 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆837 · Updated last week
- Minimalistic large language model 3D-parallelism training ☆1,956 · Updated last week
- VPTQ, A Flexible and Extreme low-bit quantization algorithm ☆646 · Updated 2 months ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆1,553 · Updated this week
- OLMoE: Open Mixture-of-Experts Language Models ☆792 · Updated 3 months ago
- Recipes to scale inference-time compute of open models ☆1,099 · Updated last month
- MINT-1T: A one trillion token multimodal interleaved dataset. ☆817 · Updated 11 months ago
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆2,022 · Updated 11 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,549 · Updated last year
- AllenAI's post-training codebase ☆3,033 · Updated this week
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,385 · Updated last year
- Code for BLT research paper ☆1,690 · Updated last month
- [ICLR-2025-SLLM Spotlight 🔥] MobiLlama: Small Language Model tailored for edge devices ☆647 · Updated last month
- PyTorch native quantization and sparsity for training and inference ☆2,138 · Updated this week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,258 · Updated 3 months ago
- Implementing DeepSeek R1's GRPO algorithm from scratch ☆1,451 · Updated 2 months ago
- 4M: Massively Multimodal Masked Modeling ☆1,739 · Updated last month
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,113 · Updated 3 weeks ago
- NanoGPT (124M) in 3 minutes ☆2,721 · Updated last week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,670 · Updated this week
- A modern model graph visualizer and debugger ☆1,262 · Updated this week
- TinyChatEngine: On-Device LLM Inference Library ☆869 · Updated 11 months ago
- A pytorch quantization backend for optimum ☆958 · Updated 2 weeks ago
- Muon is Scalable for LLM Training ☆1,087 · Updated 3 months ago
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆2,200 · Updated last month
- Democratizing Reinforcement Learning for LLMs ☆3,411 · Updated last month