kyegomez / AnyMAL
The open source implementation of "AnyMAL: An Efficient and Scalable Any-Modality Augmented Language Model"
☆21 · Updated 8 months ago
Alternatives and similar repositories for AnyMAL
Users interested in AnyMAL are comparing it to the libraries listed below.
- Vision-oriented multimodal AI ☆49 · Updated last year
- minisora-DiT, a DiT reproduction based on XTuner from the open-source community MiniSora ☆40 · Updated last year
- Survey of small language models ☆16 · Updated last year
- PyTorch implementation of HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models ☆27 · Updated last year
- On the Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability ☆41 · Updated 3 months ago
- ControlLLM: Augment Language Models with Tools by Searching on Graphs ☆193 · Updated last year
- Multimodal Open-O1 (MO1) is designed to enhance the accuracy of inference models by utilizing a novel prompt-based approach. This tool wo… ☆29 · Updated last year
- Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆90 · Updated 2 years ago
- [ACL 2025 Findings] Benchmarking Multihop Multimodal Internet Agents ☆46 · Updated 7 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated last year
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆35 · Updated last year
- Empirical Study Towards Building an Effective Multi-Modal Large Language Model ☆22 · Updated last year
- [CVPR 2024] DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model ☆17 · Updated last year
- ☆66 · Updated 2 years ago
- MLLM-DataEngine: An Iterative Refinement Approach for MLLM ☆48 · Updated last year
- Code for our paper "All in an Aggregated Image for In-Image Learning" ☆29 · Updated last year
- Official repo for StableLLaVA ☆94 · Updated last year
- ☆13 · Updated 2 years ago
- LAVIS - A One-stop Library for Language-Vision Intelligence ☆48 · Updated last year
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" ☆52 · Updated 10 months ago
- INF-LLaVA: Dual-perspective Perception for High-Resolution Multimodal Large Language Model ☆42 · Updated last year
- This repo contains the code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025] ☆77 · Updated 3 months ago
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆49 · Updated 2 months ago
- The official repository of "R-4B: Incentivizing General-Purpose Auto-Thinking Capability in MLLMs via Bi-Mode Integration" ☆113 · Updated last month
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain ☆103 · Updated last year
- ☆28 · Updated last month
- An efficient multi-modal instruction-following data synthesis tool and the official implementation of Oasis https://arxiv.org/abs/2503.08… ☆31 · Updated 4 months ago
- VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation ☆86 · Updated last year
- Official code for GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation ☆142 · Updated 11 months ago
- ☆55 · Updated 5 months ago