nengelmann / Fuyu-8B---Exploration
Exploration of the multimodal Fuyu-8B model from Adept. 🤗 🚀
☆27 · Updated last year
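For a quick look at the model this repository explores, here is a minimal sketch of loading Fuyu-8B through Hugging Face Transformers and captioning a single image. It assumes the `adept/fuyu-8b` checkpoint on the Hub, a `transformers` version with Fuyu support, and a GPU with enough memory; the image path is a placeholder, and this is not code from the repository itself.

```python
# Minimal sketch: load Fuyu-8B from the Hugging Face Hub and caption one image.
# Assumes transformers >= 4.35 (Fuyu support) and sufficient GPU memory.
import torch
from PIL import Image
from transformers import FuyuForCausalLM, FuyuProcessor

model_id = "adept/fuyu-8b"
processor = FuyuProcessor.from_pretrained(model_id)
model = FuyuForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16
)

image = Image.open("example.png")  # placeholder path, not from the repo
prompt = "Generate a coco-style caption.\n"

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=32)

# Decode only the newly generated tokens, not the prompt.
new_tokens = output_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```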
Alternatives and similar repositories for Fuyu-8B---Exploration
Users interested in Fuyu-8B---Exploration are comparing it to the libraries listed below
- Our 2nd-gen LMM ☆34 · Updated last year
- A simple MLLM that surpasses QwenVL-Max with open-source data only, built on a 14B LLM. ☆38 · Updated last year
- ☆29 · Updated last year
- ☆28 · Updated last year
- ☆74 · Updated last year
- imagetokenizer is a Python package that helps you encode visuals and generate visual token IDs from a codebook; supports both image and video… ☆37 · Updated last year
- ☆57 · Updated last year
- An End-to-End Model with Adaptive Filtering for Retrieval-Augmented Generation ☆15 · Updated last year
- Simple Implementation of TinyGPTV in super simple Zeta lego blocks ☆15 · Updated 11 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated last year
- In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning ☆35 · Updated 2 years ago
- A tiny, didactical implementation of LLAMA 3 ☆42 · Updated 10 months ago
- An efficient multi-modal instruction-following data synthesis tool and the official implementation of Oasis https://arxiv.org/abs/2503.08… ☆32 · Updated 4 months ago
- XVERSE-MoE-A36B: A multilingual large language model developed by XVERSE Technology Inc. ☆38 · Updated last year
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆91 · Updated last year
- Helper functions for processing and integrating visual-language information with the Qwen-VL series models ☆15 · Updated last year
- [NAACL 2025] Representing Rule-based Chatbots with Transformers ☆22 · Updated 8 months ago
- A multimodal large-scale model that performs close to the closed-source Qwen-VL-PLUS on many datasets and significantly surpasses the p… ☆14 · Updated last year
- Reproduction of LLaVA-v1.5 based on Llama-3-8b LLM backbone. ☆65 · Updated last year
- The SAIL-VL2 series model developed by the BytedanceDouyinContent Group ☆70 · Updated last month
- Chinese CLIP models with SOTA performance. ☆59 · Updated 2 years ago
- Pytorch implementation of HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models ☆27 · Updated last year
- EfficientSAM + YOLO World base model for use with Autodistill. ☆10 · Updated last year
- MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering. A comprehensive evaluation of multimodal large model multilingua… ☆63 · Updated 5 months ago
- Fast LLM training codebase with dynamic strategy selection [Deepspeed+Megatron+FlashAttention+CudaFusionKernel+Compiler] ☆41 · Updated last year
- [NeurIPS 2025] Elevating Visual Perception in Multimodal LLMs with Visual Embedding Distillation, arXiv 2024 ☆64 · Updated last week
- Official code for infimm-hd ☆16 · Updated last year
- [TMLR23] Official implementation of UnIVAL: Unified Model for Image, Video, Audio and Language Tasks. ☆231 · Updated last year
- [ACL2025 Findings] Benchmarking Multihop Multimodal Internet Agents ☆46 · Updated 8 months ago
- Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆90 · Updated 2 years ago