Adapting LLaMA Decoder to Vision Transformer
☆30 · Updated May 20, 2024
Alternatives and similar repositories for iLLaMA
Users that are interested in iLLaMA are comparing it to the libraries listed below
- [TACL] Do Vision and Language Models Share Concepts? A Vector Space Alignment Study ☆16 · Updated Nov 22, 2024
- Official repository for "Boosting Adversarial Transferability using Dynamic Cues" (ICLR 2023) ☆20 · Updated Aug 24, 2023
- ☆24 · Updated May 23, 2025
- Multi-Granularity Language-Guided Multi-Object Tracking ☆24 · Updated Nov 3, 2025
- An Open-source Factuality Evaluation Demo for LLMs ☆32 · Updated this week
- Official InfiniBench: A Benchmark for Large Multi-Modal Models in Long-Form Movies and TV Shows ☆19 · Updated Nov 4, 2025
- [MICCAI 2025] Hierarchical Self-Supervised Adversarial Training for Robust Vision Models in Histopathology ☆12 · Updated Jun 17, 2025
- [MICCAI 2024] Official code for the paper "MedContext: Learning Contextual Cues for Efficient Volumetric Medical Segmentation" ☆14 · Updated Nov 1, 2024
- A new multi-task learning framework using Vision Transformers ☆11 · Updated Jun 19, 2024
- A Novel Semantic Segmentation Network using Enhanced Boundaries in Cluttered Scenes (WACV 2025) ☆11 · Updated Aug 11, 2025
- Video-Panda: Parameter-efficient Alignment for Encoder-free Video-Language Models [CVPR 2025] ☆79 · Updated Jun 24, 2025
- Official Implementation of Video-MA2MBA ☆12 · Updated Dec 3, 2024
- ☆14 · Updated Apr 25, 2025
- ReaRAG: Knowledge-guided Reasoning Enhances Factuality of Large Reasoning Models with Iterative Retrieval Augmented Generation ☆25 · Updated Aug 24, 2025
- ☆12 · Updated Dec 4, 2024
- A unified framework for controllable caption generation across images, videos, and audio. Supports multi-modal inputs and customizable ca… ☆52 · Updated Jul 24, 2025
- [ICCV 2025] Dynamic-VLM ☆28 · Updated Dec 16, 2024
- All in One: Exploring Unified Vision-Language Tracking with Multi-Modal Alignment ☆19 · Updated Feb 11, 2025
- Learnable Weight Initialization for Volumetric Medical Image Segmentation [Elsevier AIM 2024] ☆22 · Updated Oct 27, 2024
- [CVPR 2025 🔥] STING-BEE, the first domain-aware visual AI assistant for X-ray baggage security screening ☆23 · Updated Jun 27, 2025
- PyTorch implementation of "Sample- and Parameter-Efficient Auto-Regressive Image Models" from CVPR 2025 ☆14 · Updated Nov 21, 2025
- Implementation of the paper "LIMITR: Leveraging Local Information for Medical Image-Text Representation" ☆17 · Updated Feb 8, 2024
- [CVPR 2023] Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection ☆30 · Updated Jun 21, 2023
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆94 · Updated Sep 14, 2024
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆62 · Updated Nov 7, 2024
- [BMVC 2024] On Evaluating Adversarial Robustness of Volumetric Medical Segmentation Models ☆15 · Updated Nov 1, 2024
- Official PyTorch implementation of "Facing the Elephant in the Room: Visual Prompt Tuning or Full Finetuning?" (ICLR 2024) ☆13 · Updated Mar 8, 2024
- [ECCV'24 Oral] PiTe: Pixel-Temporal Alignment for Large Video-Language Model ☆17 · Updated Feb 13, 2025
- ☆21 · Updated Jul 25, 2025
- Learning 1D Causal Visual Representation with De-focus Attention Networks ☆35 · Updated Jun 7, 2024
- Official code repository of Shuffle-R1 ☆25 · Updated Jan 27, 2026
- [AAAI 2026] Segment Anything Across Shots: A Method and Benchmark ☆27 · Updated Nov 16, 2025
- ☆19 · Updated Jun 29, 2025
- ☆13 · Updated Jul 20, 2024
- [ECCVW 2024 Oral] Official repository of the paper "Makeup-Guided Facial Privacy Protection via Untrained Neural Network Priors" ☆12 · Updated Oct 11, 2024
- ☆15 · Updated Jul 24, 2022
- BESA is a differentiable weight pruning technique for large language models ☆17 · Updated Mar 4, 2024
- [ICLR 2025] IDA-VLM: Towards Movie Understanding via ID-Aware Large Vision-Language Model ☆37 · Updated Nov 27, 2024
- ☆38 · Updated Feb 6, 2025