kyegomez / VisionLLaMA
Implementation of VisionLLaMA from the paper: "VisionLLaMA: A Unified LLaMA Interface for Vision Tasks" in PyTorch and Zeta
☆16 · Updated 7 months ago
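The VisionLLaMA paper applies LLaMA-style transformer components (RMSNorm, SwiGLU feed-forward, rotary position handling) to image patch tokens instead of the usual ViT block internals. The sketch below is a minimal, self-contained illustration of that idea in plain PyTorch; the class names, shapes, and the omission of rotary embeddings are simplifications of my own and are not the API of this repository or of Zeta.

```python
# Minimal sketch of a LLaMA-style block applied to image patch tokens.
# Names (RMSNorm, LlamaStyleVisionBlock) are illustrative, not this repo's API.
import torch
import torch.nn as nn


class RMSNorm(nn.Module):
    """Root-mean-square normalization, as used in LLaMA (no mean subtraction)."""

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms_inv = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return x * rms_inv * self.weight


class LlamaStyleVisionBlock(nn.Module):
    """Pre-norm attention + SwiGLU MLP over patch tokens (rotary embeddings omitted)."""

    def __init__(self, dim: int, heads: int, mlp_dim: int):
        super().__init__()
        self.norm1 = RMSNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = RMSNorm(dim)
        # SwiGLU feed-forward: SiLU(x W_gate) * (x W_up), then project back down.
        self.w_gate = nn.Linear(dim, mlp_dim, bias=False)
        self.w_up = nn.Linear(dim, mlp_dim, bias=False)
        self.w_down = nn.Linear(mlp_dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        h = self.norm2(x)
        return x + self.w_down(nn.functional.silu(self.w_gate(h)) * self.w_up(h))


if __name__ == "__main__":
    # 16x16 patches of a 224x224 image -> 196 tokens of width 768.
    tokens = torch.randn(2, 196, 768)
    block = LlamaStyleVisionBlock(dim=768, heads=12, mlp_dim=2048)
    print(block(tokens).shape)  # torch.Size([2, 196, 768])
```

For the actual interface exposed by this repository and by Zeta, defer to the repo's README.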
Alternatives and similar repositories for VisionLLaMA
Users interested in VisionLLaMA are comparing it to the repositories listed below
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… ☆36 · Updated last year
- Pixel Parsing. A reproduction of OCR-free end-to-end document understanding models with open data ☆21 · Updated 10 months ago
- PyTorch Implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" ☆24 · Updated 3 weeks ago
- Simple Implementation of TinyGPTV in super simple Zeta lego blocks ☆16 · Updated 7 months ago
- The open source implementation of "NeVA: NeMo Vision and Language Assistant" ☆18 · Updated last year
- EdgeSAM model for use with Autodistill. ☆27 · Updated last year
- ☆17 · Updated last year
- A Data Source for Reasoning Embodied Agents ☆19 · Updated last year
- ☆58 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- ☆14 · Updated 2 years ago
- My personal implementation of the model from "Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities"; they haven't rel… ☆13 · Updated last year
- The open source implementation of the base model behind GPT-4 from OpenAI [Language + Multi-Modal] ☆10 · Updated last year
- A dashboard for exploring timm learning rate schedulers ☆19 · Updated 7 months ago
- LoRA fine-tuned Stable Diffusion Deployment ☆31 · Updated 2 years ago
- Visualize multi-model embedding spaces. The first goal is to quickly get a lay of the land of any embedding space. Then be able to scroll… ☆27 · Updated last year
- Code and pretrained models for the paper: "MatMamba: A Matryoshka State Space Model" ☆59 · Updated 7 months ago
- A list of language models with permissive licenses such as MIT or Apache 2.0 ☆24 · Updated 3 months ago
- This repository includes the code to download the curated HuggingFace papers into a single Markdown-formatted file ☆14 · Updated 10 months ago
- Set of scripts to finetune LLMs ☆37 · Updated last year
- Code, results and other artifacts from the paper introducing the WildChat-50m dataset and the Re-Wild model family. ☆29 · Updated 2 months ago
- Load any CLIP model with a standardized interface ☆21 · Updated last year
- Implementation of the LDP module block in PyTorch and Zeta from the paper: "MobileVLM: A Fast, Strong and Open Vision Language Assistant … ☆16 · Updated last year
- The open source community's implementation of the all-new Multi-Modal Causal Attention from "DeepSpeed-VisualChat: Multi-Round Multi-Imag… ☆12 · Updated last year
- Description and applications of OpenAI's paper about DALL-E (2021) and implementation of other (CLIP-guided) zero-shot text-to-image gene… ☆33 · Updated 2 years ago
- A plug-and-play pipeline that utilizes Segment Anything to segment datasets with rich detail for downstream fine-tuning on vision mod… ☆21 · Updated last year
- The Next Generation Multi-Modality Superintelligence ☆71 · Updated 9 months ago
- Tools for content datamining and NLP at scale ☆43 · Updated last year
- ☆63 · Updated 9 months ago
- Tools for merging pretrained large language models. ☆19 · Updated last year