ByteDance-BandAI / LLM-I
LLM-I: Transform LLMs into natural interleaved multimodal creators! ✨ A tool-use framework supporting image search, generation, code execution & editing
★37 · Updated 2 months ago
Alternatives and similar repositories for LLM-I
Users that are interested in LLM-I are comparing it to the libraries listed below
- [ACL 2025 Findings] Benchmarking Multihop Multimodal Internet Agents ★47 · Updated 10 months ago
- PyTorch implementation of HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models ★28 · Updated last year
- Resa: Transparent Reasoning Models via SAEs ★47 · Updated 3 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ★43 · Updated last year
- The official repository of "R-4B: Incentivizing General-Purpose Auto-Thinking Capability in MLLMs via Bi-Mode Integration" ★134 · Updated 4 months ago
- CODA: Coordinating the Cerebrum and Cerebellum for a Dual-Brain Computer Use Agent with Decoupled Reinforcement Learning ★33 · Updated 4 months ago
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" ★53 · Updated last year
- Official code for "BoostStep: Boosting mathematical capability of Large Language Models via improved single-step reasoning" ★36 · Updated 11 months ago
- [COLM 2025] "C3PO: Critical-Layer, Core-Expert, Collaborative Pathway Optimization for Test-Time Expert Re-Mixing" ★19 · Updated 9 months ago
- Geometric-Mean Policy Optimization ★96 · Updated last month
- The official repo for "Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem" [EMNLP 2025] ★33 · Updated 4 months ago
- ★16 · Updated last year
- [NeurIPS 2024] A task generation and model evaluation system for multimodal language models. ★73 · Updated last year
- Multimodal RewardBench ★58 · Updated 10 months ago
- [EMNLP 2025 Main] AlphaOne: Reasoning Models Thinking Slow and Fast at Test Time ★89 · Updated 7 months ago
- The official repo for "VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search" [EMNLP 2025] ★36 · Updated 4 months ago
- [ICML 2025] Code for "R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts" ★17 · Updated 10 months ago
- ★38 · Updated 4 months ago
- [ACL 2025] A Generalizable and Purely Unsupervised Self-Training Framework ★71 · Updated 7 months ago
- This repo contains the code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025] ★77 · Updated 6 months ago
- ★19 · Updated 10 months ago
- ★50 · Updated 7 months ago
- The open-source code of MetaStone-S1. ★106 · Updated 5 months ago
- Beyond Real: Imaginary Extension of Rotary Position Embeddings for Long-Context LLMs ★32 · Updated last month
- Sotopia-RL: Reward Design for Social Intelligence ★46 · Updated 4 months ago
- [NeurIPS 2024 LanGame Workshop] On the Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability ★41 · Updated 6 months ago
- Evaluation framework for the paper "VisualWebBench: How Far Have Multimodal LLMs Evolved in Web Page Understanding and Grounding?" ★62 · Updated last year
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ★35 · Updated last year
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ★128 · Updated 5 months ago
- A Recipe for Building LLM Reasoners to Solve Complex Instructions ★29 · Updated 3 months ago