Emericen / tiny-qwen
A minimal PyTorch re-implementation of Qwen3 VL with a fancy CLI
☆308, updated last month
Alternatives and similar repositories for tiny-qwen
Users interested in tiny-qwen are comparing it to the libraries listed below.
- Model compression toolkit engineered for enhanced usability, comprehensiveness, and efficiency. (☆274, updated this week)
- A collection of tricks and tools to speed up transformer models (☆194, updated last month)
- Fused Qwen3 MoE layer for faster training, compatible with Transformers, LoRA, bnb 4-bit quant, Unsloth. Also possible to train LoRA over… (☆225, updated this week)
- 青稞Talk (☆186, updated last week)
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs (☆200, updated last month; a latent-KV sketch follows this list)
- Based on Nano-vLLM, a simple replication of vLLM with self-contained paged-attention and flash-attention implementations (☆191, updated this week)
- ZO2 (Zeroth-Order Offloading): Full-Parameter Fine-Tuning of 175B LLMs with 18GB GPU Memory [COLM 2025] (☆198, updated 6 months ago; a zeroth-order sketch follows this list)
- mllm-npu: training multimodal large language models on Ascend NPUs (☆95, updated last year)
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… (☆219, updated last week)
- ☆444, updated 5 months ago
- Block Diffusion for Ultra-Fast Speculative Decoding (☆349, updated 2 weeks ago; a speculative-decoding sketch follows this list)
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme (☆146, updated 9 months ago)
- ☆196, updated 3 weeks ago
- A reproduction of the DeepSeek-OCR model, including training (☆202, updated 2 months ago)
- Parallel Scaling Law for Language Models: Beyond Parameter and Inference-Time Scaling (☆467, updated 8 months ago)
- Speed Always Wins: A Survey on Efficient Architectures for Large Language Models (☆389, updated 2 months ago)
- Ling-V2 is a MoE LLM provided and open-sourced by InclusionAI. (☆252, updated 3 months ago)
- A 500M-parameter real-time VLM that runs on CPU, surpassing Moondream2 and SmolVLM; easy to train from scratch (☆246, updated 8 months ago)
- Official codebase for "Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling" (☆280, updated 11 months ago)
- A lightweight reinforcement learning framework that integrates seamlessly into your codebase, empowering developers to focus on algorithm… (☆98, updated 4 months ago)
- PyTorch implementation of DeepSeek's Native Sparse Attention (☆111, updated last month)
- ☆83, updated 9 months ago
- GLM Series Edge Models (☆156, updated 7 months ago)
- The official repository of the dots.vlm1 instruct models proposed by rednote-hilab. (☆277, updated 3 months ago)
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation (☆120, updated 8 months ago)
- Implementation of FlashAttention in PyTorch (☆180, updated last year)
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" (☆790, updated last month)
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) (☆426, updated 3 months ago)
- The official repo of "One RL to See Them All": Visual Triple Unified Reinforcement Learning (☆330, updated 7 months ago)
- dInfer: An Efficient Inference Framework for Diffusion Language Models (☆396, updated 2 weeks ago)
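
Three of the techniques named above are compact enough to sketch. First, ZO2's zeroth-order offloading builds on MeZO-style gradient estimation: instead of backpropagating, perturb the weights along a random direction regenerated from a shared seed and estimate the directional derivative from two forward passes. A minimal sketch, not ZO2's actual API; `loss_fn`, `eps`, and `lr` are illustrative names:

```python
import torch

@torch.no_grad()
def zo_sgd_step(model, loss_fn, batch, eps=1e-3, lr=1e-6, seed=0):
    """One MeZO-style zeroth-order step: two forward passes, no backward.
    The random direction z is regenerated from `seed` each time rather
    than stored, so memory stays at inference level."""
    def perturb(scale):
        torch.manual_seed(seed)
        for p in model.parameters():
            p.add_(torch.randn_like(p), alpha=scale)

    perturb(+eps)                              # theta + eps * z
    loss_plus = loss_fn(model, batch).item()
    perturb(-2 * eps)                          # theta - eps * z
    loss_minus = loss_fn(model, batch).item()
    perturb(+eps)                              # restore theta

    grad_est = (loss_plus - loss_minus) / (2 * eps)   # projected gradient
    torch.manual_seed(seed)
    for p in model.parameters():
        p.add_(torch.randn_like(p), alpha=-lr * grad_est)  # theta -= lr * g * z
```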
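
Second, the Multi-Head Latent Attention entries (MHA2MLA-style conversion and TransMLA) center on compressing keys and values through a shared low-rank latent so that only the small latent needs to be cached. A rough sketch of the compression path with made-up dimensions, not either repo's actual code:

```python
import torch
import torch.nn as nn

class LatentKV(nn.Module):
    """Low-rank KV compression in the spirit of MLA: the KV cache stores
    the latent c (d_latent values per token) instead of the full keys
    and values (2 * d_model values per token)."""

    def __init__(self, d_model=1024, d_latent=128, n_heads=8):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.w_down = nn.Linear(d_model, d_latent, bias=False)  # compress
        self.w_uk = nn.Linear(d_latent, d_model, bias=False)    # latent -> K
        self.w_uv = nn.Linear(d_latent, d_model, bias=False)    # latent -> V

    def forward(self, h):
        b, s, _ = h.shape
        c = self.w_down(h)                     # (b, s, d_latent): cache this
        k = self.w_uk(c).view(b, s, self.n_heads, self.d_head)
        v = self.w_uv(c).view(b, s, self.n_heads, self.d_head)
        return c, k, v
```

With `d_latent` much smaller than `2 * d_model`, the cache shrinks by roughly `2 * d_model / d_latent` (16x at the toy dimensions above), which is the economics these entries trade on; the real designs also handle RoPE separately, which this sketch omits.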
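
Third, the speculative-decoding entries (block diffusion, TokenSwift) share one verification idea: a cheap drafter proposes several tokens and the full model checks them all in a single forward pass. A greedy toy version, assuming Hugging-Face-style causal LMs with a `.logits` output; real implementations verify against the sampling distribution rather than argmax:

```python
import torch

@torch.no_grad()
def speculative_step(draft, target, ids, k=4):
    """Draft k tokens greedily, then let the target model verify all of
    them in one forward pass, keeping the longest agreeing prefix plus
    one corrected token. (Assumes batch size 1 for the accept test.)"""
    proposal = ids
    for _ in range(k):                         # cheap autoregressive drafting
        nxt = draft(proposal).logits[:, -1].argmax(-1, keepdim=True)
        proposal = torch.cat([proposal, nxt], dim=-1)

    tgt_pred = target(proposal).logits.argmax(-1)   # target's picks, all positions
    out = ids
    for pos in range(ids.shape[1], proposal.shape[1]):
        tok = tgt_pred[:, pos - 1:pos]         # target's choice for position pos
        out = torch.cat([out, tok], dim=-1)    # always usable: verified or fixed
        if not torch.equal(tok, proposal[:, pos:pos + 1]):
            break                              # first disagreement ends the round
    return out
```

Each round thus emits between one and k tokens for a single target forward pass, which is where the speedup comes from when the drafter agrees often.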