mbzuai-oryx / AIN
AIN - The First Arabic Inclusive Large Multimodal Model. It is a versatile bilingual LMM excelling in visual and contextual understanding across diverse domains.
☆46 · Updated 5 months ago
Alternatives and similar repositories for AIN
Users interested in AIN are comparing it to the libraries listed below.
- [ACL 2025 🔥] A Comprehensive Multi-Domain Benchmark for Arabic OCR and Document Understanding ☆45 · Updated 2 months ago
- [NAACL 2025 🔥] CAMEL-Bench is an Arabic benchmark for evaluating multimodal models across eight domains with 29,000 questions. ☆32 · Updated 3 months ago
- [CVPR 2025 🔥] ALM-Bench is a multilingual multi-modal diverse cultural benchmark for 100 languages across 19 categories. It assesses the… ☆43 · Updated 2 months ago
- [EMNLP'23] ClimateGPT: a specialized LLM for conversations related to Climate Change and Sustainability topics in both English and Arabi… ☆79 · Updated 10 months ago
- (WACV 2025 - Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… ☆83 · Updated last week
- Bio-Medical EXpert LMM with English and Arabic Language Capabilities ☆69 · Updated 3 months ago
- Bilingual Medical Mixture of Experts LLM ☆31 · Updated 8 months ago
- Vision-language model fine-tuning notebooks & use cases (MedGemma, PaliGemma, Florence, …) ☆47 · Updated last month
- ARB: A Comprehensive Arabic Multimodal Reasoning Benchmark ☆15 · Updated 2 months ago
- [Fully open] [Encoder-free MLLM] Vision as LoRA ☆326 · Updated 2 months ago
- VideoMathQA is a benchmark designed to evaluate mathematical reasoning in real-world educational videos ☆15 · Updated last month
- [BMVC 2025] Official Implementation of the paper "PerSense: Personalized Instance Segmentation in Dense Images" ☆25 · Updated this week
- LR0.FM: Low-Resolution Zero-shot Classification Benchmark For Foundation Models ☆17 · Updated last month
- A minimal implementation of a LLaVA-style VLM with interleaved image, text & video processing ability. ☆94 · Updated 7 months ago
- This repository contains the official source code for SALT: Parameter-Efficient Fine-Tuning via Singular Value Adaptation with Low-Rank T… ☆25 · Updated last week
- ☆66 · Updated last month
- Composition of Multimodal Language Models From Scratch ☆15 · Updated 11 months ago
- Evaluation and dataset construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" ☆28 · Updated 3 months ago
- Official code repository for the ICML 2025 paper "ExPLoRA: Parameter-Efficient Extended Pre-training to Adapt Vision Transformers under Doma… ☆38 · Updated 3 weeks ago
- Code for "Enhancing In-context Learning via Linear Probe Calibration" ☆35 · Updated last year
- [CVPRW-25 MMFM] Official repository of the paper "How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite fo… ☆49 · Updated 11 months ago
- [ACL 2025 🔥] Time Travel is a Comprehensive Benchmark to Evaluate LMMs on Historical and Cultural Artifacts ☆18 · Updated 2 months ago
- ☆44 · Updated last year
- Learnable Weight Initialization for Volumetric Medical Image Segmentation [Elsevier AIM 2024] ☆22 · Updated 9 months ago
- ☆37 · Updated 2 months ago
- [InterSpeech 2024] Official code repository of the paper "Bird Whisperer: Leveraging Large Pre-trained Acoustic Model for Bird Call Cl… ☆35 · Updated 8 months ago
- Code, models, and data for "Advancements in Arabic Grammatical Error Detection and Correction: An Empirical Investigation" (EMNLP 2023) ☆17 · Updated 11 months ago
- [ACCV 2024] ObjectCompose: Evaluating Resilience of Vision-Based Models on Object-to-Background Compositional Changes ☆37 · Updated 6 months ago
- [CVPR 2024] KEPP: Why Not Use Your Textbook? Knowledge-Enhanced Procedure Planning of Instructional Videos ☆12 · Updated 10 months ago
- This is the repo for the paper "PANGEA: A Fully Open Multilingual Multimodal LLM for 39 Languages" ☆110 · Updated last month