kyegomez / NeVA
The open source implementation of "NeVA: NeMo Vision and Language Assistant"
★17 · Updated 2 years ago
Alternatives and similar repositories for NeVA
Users interested in NeVA are comparing it to the libraries listed below.
- Unofficial implementation and experiments related to Set-of-Mark (SoM) ★88 · Updated 2 years ago
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ★37 · Updated 2 years ago
- The Next Generation Multi-Modality Superintelligence ★70 · Updated last year
- Finetune any model on HF in less than 30 seconds ★56 · Updated last week
- ★69 · Updated last year
- ClickDiffusion: Harnessing LLMs for Interactive Precise Image Editing ★70 · Updated last year
- Use Grounding DINO, Segment Anything, and GPT-4V to label images with segmentation masks for use in training smaller, fine-tuned models. ★66 · Updated 2 years ago
- ★54 · Updated 2 years ago
- ★59 · Updated last year
- (WACV 2025, Oral) Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… ★84 · Updated 5 months ago
- EdgeSAM model for use with Autodistill. ★29 · Updated last year
- ★15 · Updated 2 years ago
- This is the repository for the Photorealistic Unreal Graphics (PUG) datasets for representation learning. ★237 · Updated last year
- Documentation, notes, links, etc. for streams. ★84 · Updated last year
- Streamlit app presented to the Streamlit LLMs Hackathon, September 23 ★16 · Updated last year
- An EXA-scale repository of multi-modality AI resources, from papers and models to foundational libraries! ★40 · Updated last year
- A plug-and-play pipeline that utilizes Segment Anything to segment datasets with rich detail for downstream fine-tuning on vision mod… ★20 · Updated last year
- Maybe the new state-of-the-art vision model? We'll see 🤷‍♂️ ★170 · Updated 2 years ago
- Implementation of VisionLLaMA from the paper "VisionLLaMA: A Unified LLaMA Interface for Vision Tasks" in PyTorch and Zeta ★16 · Updated last year
- Repository for the paper "TiC-CLIP: Continual Training of CLIP Models" (ICLR 2024) ★110 · Updated last year
- ★43 · Updated last year
- Internet Explorer explores the web in a self-supervised manner to progressively find relevant examples that improve performance on a desi… ★163 · Updated 2 years ago
- A multi-modal AI model that can generate high-quality novel videos from text, images, or video clips. ★64 · Updated 2 years ago
- Summarize any arXiv paper with ease ★66 · Updated 2 years ago
- Visual RAG using less than 300 lines of code. ★29 · Updated last year
- Implementation of the text-to-video model LUMIERE from the paper "A Space-Time Diffusion Model for Video Generation" by Google Research ★52 · Updated 11 months ago
- Implementation of the paper "BRAVE: Broadening the visual encoding of vision-language models" ★25 · Updated this week
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ★248 · Updated 11 months ago
- Multi-model video-to-text by combining embeddings from Flan-T5 + CLIP + Whisper + SceneGraph. The 'backbone LLM' is pre-trained from scra… ★54 · Updated 2 years ago
- Cerule - A Tiny Mighty Vision Model ★68 · Updated 2 months ago