kyegomez / NeVA
The open source implementation of "NeVA: NeMo Vision and Language Assistant"
☆17 · Updated 2 years ago
Alternatives and similar repositories for NeVA
Users that are interested in NeVA are comparing it to the libraries listed below
- Unofficial implementation and experiments related to Set-of-Mark (SoM) ☆88 · Updated 2 years ago
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆37 · Updated 2 years ago
- ☆43 · Updated last year
- EdgeSAM model for use with Autodistill. ☆29 · Updated last year
- ☆54 · Updated 2 years ago
- ☆69 · Updated last year
- Streamlit app presented to the Streamlit LLMs Hackathon, September '23 ☆16 · Updated last year
- ☆15 · Updated 2 years ago
- ☆29 · Updated 2 years ago
- The Next Generation Multi-Modality Superintelligence ☆70 · Updated last year
- ClickDiffusion: Harnessing LLMs for Interactive Precise Image Editing ☆70 · Updated last year
- (WACV 2025 - Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… ☆84 · Updated 6 months ago
- This is the repository for the Photorealistic Unreal Graphics (PUG) datasets for representation learning. ☆237 · Updated last year
- Use Grounding DINO, Segment Anything, and GPT-4V to label images with segmentation masks for use in training smaller, fine-tuned models. ☆66 · Updated 2 years ago
- Documentation, notes, links, etc. for streams. ☆84 · Updated last year
- Implementation of VisionLLaMA from the paper "VisionLLaMA: A Unified LLaMA Interface for Vision Tasks" in PyTorch and Zeta ☆16 · Updated last year
- Finetune any model on HF in less than 30 seconds ☆56 · Updated last week
- Implementation of the text-to-video model LUMIERE from the paper "A Space-Time Diffusion Model for Video Generation" by Google Research ☆52 · Updated last year
- This repository holds the "Fully automated landmarking and facial segmentation on 3D photographs" files ☆30 · Updated 2 years ago
- GPT-4V(ision) module for use with Autodistill. ☆25 · Updated last year
- ☆59 · Updated last year
- Use Segment Anything 2, grounded with Florence-2, to auto-label data for use in training vision models. ☆134 · Updated last year
- ☆63 · Updated last year
- A multi-modal AI model that can generate high-quality novel videos from text, images, or video clips. ☆64 · Updated 2 years ago
- Use Grounding DINO, Segment Anything, and CLIP to label objects in images. ☆35 · Updated 2 years ago
- ☆17 · Updated 2 years ago
- Use Florence 2 to auto-label data for use in training fine-tuned object detection models. ☆69 · Updated last year
- Official Code for Tracking Any Object Amodally ☆120 · Updated last year
- Multi-modal video-to-text by combining embeddings from Flan-T5 + CLIP + Whisper + SceneGraph. The 'backbone LLM' is pre-trained from scra… ☆54 · Updated 2 years ago
- ☆20 · Updated 10 months ago