pufanyi / syphus
Syphus: Automatic Instruction-Response Generation Pipeline
☆14 · Updated last year
Alternatives and similar repositories for syphus:
Users interested in syphus are comparing it to the libraries listed below.
- Official Repository of Personalized Visual Instruct Tuning ☆26 · Updated 3 months ago
- ☆26 · Updated 6 months ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆53 · Updated this week
- Official repo of the ICLR 2025 paper "MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos" ☆25 · Updated 5 months ago
- Benchmarking and Analyzing Generative Data for Visual Recognition ☆26 · Updated last year
- TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models ☆27 · Updated 3 months ago
- OVO-Bench: How Far is Your Video-LLMs from Real-World Online Video Understanding? ☆30 · Updated last month
- [NeurIPS-24] This is the official implementation of the paper "DeepStack: Deeply Stacking Visual Tokens is Surprisingly Simple and Effect…" ☆35 · Updated 8 months ago
- [ICLR 2025] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark ☆75 · Updated last month
- Codes accompanying the paper "Toward Guidance-Free AR Visual Generation via Condition Contrastive Alignment" ☆25 · Updated last week
- [NeurIPS 2024 D&B Track] Official Repo for "LVD-2M: A Long-take Video Dataset with Temporally Dense Captions" ☆45 · Updated 4 months ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆116 · Updated 7 months ago
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆21 · Updated 10 months ago
- [ICLR 2025] CREMA: Generalizable and Efficient Video-Language Reasoning via Multimodal Modular Fusion ☆40 · Updated last month
- VPEval Codebase from Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) ☆44 · Updated last year
- 🔥 [CVPR 2024] Official implementation of "See, Say, and Segment: Teaching LMMs to Overcome False Premises (SESAME)" ☆32 · Updated 8 months ago
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" ☆54 · Updated 5 months ago
- (NeurIPS 2024) What Makes CLIP More Robust to Long-Tailed Pre-Training Data? A Controlled Study for Transferable Insights ☆24 · Updated 3 months ago
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆34 · Updated 2 months ago
- This is the official repo for ByteVideoLLM/Dynamic-VLM ☆19 · Updated 2 months ago
- Accelerating Vision-Language Pretraining with Free Language Modeling (CVPR 2023) ☆31 · Updated last year
- Codebase for the paper "Elucidating the design space of language models for image generation" ☆45 · Updated 3 months ago
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆24 · Updated 4 months ago
- This repo contains evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆19 · Updated last month
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆37 · Updated 4 months ago
- Visual Programming for Text-to-Image Generation and Evaluation (NeurIPS 2023) ☆54 · Updated last year