facebookresearch / MobileLLM
MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases. In ICML 2024.
☆ 1,379 · Updated 6 months ago
Alternatives and similar repositories for MobileLLM
Users interested in MobileLLM are comparing it to the libraries listed below.
- nanoGPT-style version of Llama 3.1 ☆ 1,438 · Updated last year
- DataComp for Language Models ☆ 1,375 · Updated last month
- MINT-1T: A one trillion token multimodal interleaved dataset. ☆ 826 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆ 2,267 · Updated last month
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆ 915 · Updated 5 months ago
- Everything about the SmolLM and SmolVLM family of models ☆ 3,332 · Updated last month
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆ 1,614 · Updated last year
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆ 2,060 · Updated last year
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference with approximate, dynamic sparse attention computation… ☆ 1,141 · Updated 3 weeks ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆ 883 · Updated last month
- OLMoE: Open Mixture-of-Experts Language Models ☆ 888 · Updated last month
- Reaching LLaMA2 Performance with 0.1M Dollars ☆ 987 · Updated last year
- TinyChatEngine: On-Device LLM Inference Library ☆ 903 · Updated last year
- Open weights language model from Google DeepMind, based on Griffin. ☆ 652 · Updated 4 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆ 1,610 · Updated 11 months ago
- llama3.np is a pure NumPy implementation of the Llama 3 model. ☆ 989 · Updated 5 months ago
- Code for the BLT research paper ☆ 1,995 · Updated 5 months ago
- VPTQ, a flexible and extreme low-bit quantization algorithm ☆ 659 · Updated 5 months ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆ 3,318 · Updated 3 months ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆ 1,288 · Updated 7 months ago
- Recipes to scale inference-time compute of open models ☆ 1,111 · Updated 5 months ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆ 1,859 · Updated last year
- TinyGPT-V: Efficient Multimodal Large Language Model via Small Backbones ☆ 1,297 · Updated last year
- Implementing DeepSeek R1's GRPO algorithm from scratch (see the sketch after this list) ☆ 1,621 · Updated 6 months ago
- PyTorch native quantization and sparsity for training and inference ☆ 2,438 · Updated this week
- Serving multiple LoRA-finetuned LLMs as one ☆ 1,101 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Model, from Meta AI ☆ 1,399 · Updated last year
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆ 2,106 · Updated this week
- Muon is Scalable for LLM Training ☆ 1,336 · Updated 2 months ago
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference (see the usage sketch after this list). ☆ 2,260 · Updated 5 months ago
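
For the GRPO entry above: the algorithm's key idea is replacing a learned critic with group-relative advantages, normalizing each sampled completion's reward against the mean and standard deviation of its group. Below is a minimal, hypothetical sequence-level sketch of that objective, not the linked repository's code; the full method also adds a KL penalty against a reference policy, omitted here, and all names are illustrative.

```python
# Hypothetical GRPO core update (sequence-level simplification).
# Assumptions: `logp_new` / `logp_old` are per-sequence log-probs of G sampled
# completions of one prompt under the current and sampling-time policies,
# and `rewards` holds their scalar rewards.
import torch

def grpo_loss(logp_new, logp_old, rewards, clip_eps=0.2):
    # Group-relative advantage: normalize rewards within the group,
    # so no learned value network (critic) is needed.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    ratio = torch.exp(logp_new - logp_old)  # importance ratio per sample
    # PPO-style clipped surrogate objective, maximized (hence the negation).
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv
    return -torch.min(unclipped, clipped).mean()

# Toy usage: 4 completions of one prompt with scalar rewards.
logp_old = torch.tensor([-12.0, -10.5, -11.2, -9.8])
logp_new = logp_old + 0.1 * torch.randn(4)
rewards = torch.tensor([1.0, 0.0, 0.5, 1.0])
print(grpo_loss(logp_new, logp_old, rewards))
```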
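
And for the AutoAWQ entry: a minimal sketch of its documented quantize-and-save flow, assuming a standard FP16 causal LM checkpoint; the model and output paths are illustrative placeholders, and whether a given architecture is supported depends on the AutoAWQ version.

```python
# Hypothetical usage sketch following AutoAWQ's documented API;
# paths and quantization settings are placeholders.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "facebook/MobileLLM-125M"  # placeholder checkpoint
quant_path = "mobilellm-125m-awq"       # placeholder output directory
quant_config = {"zero_point": True, "q_group_size": 128,
                "w_bit": 4, "version": "GEMM"}

# Load the FP16 model and its tokenizer.
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Run activation-aware 4-bit quantization (calibration data is handled internally).
model.quantize(tokenizer, quant_config=quant_config)

# Persist the quantized weights and tokenizer for later inference.
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```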