kyegomez / AnyMAL
The open source implementation of "AnyMAL: An Efficient and Scalable Any-Modality Augmented Language Model"
☆21 · Updated 3 months ago
Alternatives and similar repositories for AnyMAL
Users interested in AnyMAL are comparing it to the libraries listed below.
- PyTorch implementation of HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models ☆28 · Updated last year
- Multimodal Open-O1 (MO1) is designed to enhance the accuracy of inference models by utilizing a novel prompt-based approach. This tool wo… ☆29 · Updated 7 months ago
- Survey of small language models ☆15 · Updated 9 months ago
- ☆66 · Updated last year
- Empirical Study Towards Building An Effective Multi-Modal Large Language Model ☆22 · Updated last year
- Code for paper: Harnessing Webpage UIs for Text-Rich Visual Understanding ☆51 · Updated 5 months ago
- minisora-DiT, a DiT reproduction based on XTuner from the open source community MiniSora ☆41 · Updated last year
- On The Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability ☆39 · Updated 3 weeks ago
- Official implementation of the paper "MMInA: Benchmarking Multihop Multimodal Internet Agents" ☆43 · Updated 2 months ago
- ☆19 · Updated last year
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment ☆34 · Updated 10 months ago
- [CVPR 2024] DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model ☆17 · Updated last year
- ☆38 · Updated last week
- ☆73 · Updated last year
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated 10 months ago
- ☆29 · Updated 9 months ago
- Touchstone: Evaluating Vision-Language Models by Language Models ☆83 · Updated last year
- My implementation of the model KosmosG from "KOSMOS-G: Generating Images in Context with Multimodal Large Language Models" ☆14 · Updated 6 months ago
- ☆32 · Updated 3 months ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆84 · Updated last week
- Vision-oriented multimodal AI ☆49 · Updated 11 months ago
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆82 · Updated 6 months ago
- Document Haystacks: Vision-Language Reasoning Over Piles of 1000+ Documents, CVPR 2025 ☆18 · Updated 3 months ago
- ☆36 · Updated 8 months ago
- ControlLLM: Augment Language Models with Tools by Searching on Graphs ☆191 · Updated 10 months ago
- Simple Implementation of TinyGPTV in super simple Zeta lego blocks ☆16 · Updated 6 months ago
- A simple reproducible template to implement AI research papers ☆23 · Updated 8 months ago
- [EMNLP 2024] Official repository for paper "From the Least to the Most: Building a Plug-and-Play Visual Reasoner via Data Synthesis" ☆18 · Updated 7 months ago
- [ACL 2024] PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain ☆102 · Updated last year
- The official repo for "VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search" ☆24 · Updated 2 weeks ago