facebookresearch / MobileLLM
MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases. In ICML 2024.
☆1,292 · Updated 2 weeks ago
Alternatives and similar repositories for MobileLLM:
Users interested in MobileLLM are comparing it to the libraries listed below.
- nanoGPT-style version of Llama 3.1 ☆1,363 · Updated 9 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆867 · Updated last week
- Minimalistic large language model 3D-parallelism training ☆1,836 · Updated this week
- DataComp for Language Models ☆1,292 · Updated last month
- Official implementation of Half-Quadratic Quantization (HQQ) ☆807 · Updated this week
- OLMoE: Open Mixture-of-Experts Language Models ☆739 · Updated last month
- Everything about the SmolLM2 and SmolVLM family of models ☆2,273 · Updated last month
- MINT-1T: A one-trillion-token multimodal interleaved dataset ☆810 · Updated 9 months ago
- VPTQ, a flexible and extreme low-bit quantization algorithm ☆632 · Updated last week
- TinyChatEngine: On-Device LLM Inference Library ☆843 · Updated 10 months ago
- Reaching LLaMA2 Performance with 0.1M Dollars ☆980 · Updated 9 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,526 · Updated last year
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLM inference, approximately and dynamically sparsely compute the attention… ☆1,005 · Updated this week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆2,984 · Updated 3 weeks ago
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,479 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,066 · Updated 2 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,500 · Updated this week
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: ☆2,145 · Updated this week
- NanoGPT (124M) in 3 minutes ☆2,520 · Updated last week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,243 · Updated 2 months ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,671 · Updated last week
- A PyTorch quantization backend for Optimum ☆928 · Updated 2 weeks ago
- A self-adaptation framework 🐙 that adapts LLMs for unseen tasks in real time ☆1,048 · Updated 3 months ago
- Muon is Scalable for LLM Training ☆1,039 · Updated last month
- AllenAI's post-training codebase ☆2,942 · Updated this week
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,549 · Updated 6 months ago
- ☆864 · Updated last year
- ☆868 · Updated 7 months ago
- ☆2,928 · Updated 7 months ago
- Code for the BLT research paper ☆1,558 · Updated this week