howard-hou / VisualRWKV
VisualRWKV is the visual-enhanced version of the RWKV language model, enabling RWKV to handle various visual tasks.
☆222 · Updated last month
Alternatives and similar repositories for VisualRWKV
Users interested in VisualRWKV are comparing it to the libraries listed below.
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆134 · Updated 3 months ago
- ☆121 · Updated 3 weeks ago
- Scaling RWKV-Like Architectures for Diffusion Models ☆127 · Updated last year
- This is an inference framework for the RWKV large language model implemented purely in native PyTorch. The official native implementation… ☆128 · Updated 9 months ago
- RWKV finetuning ☆36 · Updated last year
- Official PyTorch implementation of LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation ☆32 · Updated last month
- Reinforcement Learning Toolkit for RWKV (v6, v7, ARWKV): distillation, SFT, RLHF (DPO, ORPO), infinite-context training, aligning. Exploring the… ☆40 · Updated this week
- [ICLR 2025 Spotlight] Vision-RWKV: Efficient and Scalable Visual Perception with RWKV-Like Architectures ☆456 · Updated 2 months ago
- The WorldRWKV project aims to implement training and inference across various modalities using the RWKV7 architecture. By leveraging diff… ☆44 · Updated last month
- MuLan: Adapting Multilingual Diffusion Models for 110+ Languages (adds multilingual support to any diffusion model without extra training) ☆136 · Updated 3 months ago
- ☆188 · Updated 10 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆201 · Updated 4 months ago
- ☆18 · Updated 4 months ago
- A real-time CPU VLM at 500M parameters. Surpasses Moondream2 and SmolVLM; easy to train from scratch. ☆193 · Updated 2 weeks ago
- PyTorch implementation of https://arxiv.org/html/2404.07143v1 ☆20 · Updated last year
- A project for real-time training of the RWKV model ☆49 · Updated 11 months ago
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆157 · Updated 4 months ago
- Implementation of PALI3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger" ☆146 · Updated last month
- Explore the Limits of Omni-modal Pretraining at Scale ☆97 · Updated 8 months ago
- RWKV in nanoGPT style ☆189 · Updated 11 months ago
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆28 · Updated last week
- [AAAI-25] Cobra: Extending Mamba to Multi-modal Large Language Model for Efficient Inference ☆276 · Updated 4 months ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆376 · Updated 3 weeks ago
- ☆22 · Updated 4 months ago
- ☆34 · Updated 9 months ago
- Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆194 · Updated last month
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆361 · Updated this week
- ☆132 · Updated 5 months ago
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆348 · Updated last week
- Awesome RWKV Prompts: user-friendly, ready-to-use prompt examples for general users ☆34 · Updated 3 months ago