kyegomez / NeVA
The open source implementation of "NeVA: NeMo Vision and Language Assistant"
☆18 · Updated last year
Alternatives and similar repositories for NeVA:
Users interested in NeVA are comparing it to the libraries listed below.
- EdgeSAM model for use with Autodistill. ☆26 · Updated 8 months ago
- video description generation vision-language model ☆17 · Updated 3 weeks ago
- Implementation of VisionLLaMA from the paper: "VisionLLaMA: A Unified LLaMA Interface for Vision Tasks" in PyTorch and Zeta ☆16 · Updated 3 months ago
- ☆13 · Updated 11 months ago
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆35 · Updated last year
- Gradio app to track objects in video and add visual effects ☆16 · Updated 4 months ago
- ☆30 · Updated last year
- ☆9 · Updated last year
- Multi-vision Sensor Perception and Reasoning (MS-PR) benchmark, assessing VLMs on their capacity for sensor-specific reasoning. ☆13 · Updated last month
- Python scripts performing optical flow estimation using the NeuFlowV2 model in ONNX. ☆40 · Updated 5 months ago
- ☆14 · Updated last year
- ClickDiffusion: Harnessing LLMs for Interactive Precise Image Editing ☆67 · Updated 8 months ago
- Unofficial implementation and experiments related to Set-of-Mark (SoM) 👁️ ☆81 · Updated last year
- Finetune any model on HF in less than 30 seconds ☆58 · Updated 2 weeks ago
- Use Grounding DINO, Segment Anything, and CLIP to label objects in images. ☆26 · Updated last year
- My implementation of the model KosmosG from "KOSMOS-G: Generating Images in Context with Multimodal Large Language Models" ☆14 · Updated 3 months ago
- ☆29 · Updated last year
- ☆14 · Updated last year
- ☆16 · Updated last year
- ☆68 · Updated 7 months ago
- Summarize any arXiv paper with ease ☆61 · Updated last year
- Streamlit app presented at the Streamlit LLMs Hackathon, September 23 ☆15 · Updated 9 months ago
- ☆24 · Updated last year
- Simple Implementation of TinyGPTV in super simple Zeta lego blocks ☆15 · Updated 3 months ago
- Use Grounding DINO, Segment Anything, and GPT-4V to label images with segmentation masks for use in training smaller, fine-tuned models. ☆65 · Updated last year
- My personal implementation of the model from "Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities", they haven't rel… ☆13 · Updated last year
- Pixel Parsing. A reproduction of OCR-free end-to-end document understanding models with open data ☆21 · Updated 6 months ago
- A plug-and-play pipeline that utilizes Segment Anything to segment datasets with rich detail for downstream fine-tuning on vision mod… ☆21 · Updated 11 months ago
- Official code repository for the paper: "ExPLoRA: Parameter-Efficient Extended Pre-training to Adapt Vision Transformers under Domain Shifts" ☆29 · Updated 4 months ago