kyegomez / VisionLLaMA
Implementation of VisionLLaMA from the paper "VisionLLaMA: A Unified LLaMA Interface for Vision Tasks" in PyTorch and Zeta.
☆16, updated 3 months ago

Alternatives and similar repositories for VisionLLaMA:
Users interested in VisionLLaMA are comparing it to the libraries listed below.
- Evaluate the performance of computer vision models and prompts for zero-shot models (Grounding DINO, CLIP, BLIP, DINOv2, ImageBind, model… (☆35, updated last year)
- Pixel Parsing: a reproduction of OCR-free end-to-end document understanding models with open data (☆21, updated 6 months ago)
- Simple implementation of TinyGPTV in super simple Zeta lego blocks (☆15, updated 3 months ago)
- PyTorch implementation of the paper "MM1: Methods, Analysis & Insights from Multimodal LLM Pre-training" (☆23, updated this week)
- PyTorch implementation of HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models (☆28, updated 10 months ago)
- Code for the paper "Harnessing Webpage UIs for Text-Rich Visual Understanding" (☆46, updated 2 months ago)
- ☆58, updated 11 months ago
- ☆17, updated 11 months ago
- Visualize multi-model embedding spaces. The first goal is to quickly get a lay of the land of any embedding space; then be able to scroll… (☆27, updated 9 months ago)
- A public implementation of the ReLoRA pretraining method, built on Lightning AI's PyTorch Lightning suite (☆33, updated 11 months ago)
- A minimal yet unstoppable blueprint for multi-agent AI, anchored by the rare, far-reaching "Multi-Agent AI DAO" (2017 Prior Art), empowerin… (☆23, updated last month)
- Using multiple LLMs for ensemble forecasting (☆16, updated last year)
- Notebooks to demonstrate TimmWrapper (☆15, updated last month)
- The open-source implementation of the base model behind GPT-4 from OpenAI [Language + Multi-Modal] (☆11, updated last year)
- A list of language models with permissive licenses such as MIT or Apache 2.0 (☆24, updated 3 months ago)
- Tools for merging pretrained large language models (☆19, updated 8 months ago)
- EdgeSAM model for use with Autodistill (☆26, updated 8 months ago)
- ☆59, updated this week
- LoRA fine-tuned Stable Diffusion deployment (☆31, updated 2 years ago)
- Video-LLaVA fine-tune for CinePile evaluation (☆46, updated 6 months ago)
- OLA-VLM: Elevating Visual Perception in Multimodal LLMs with Auxiliary Embedding Distillation (arXiv 2024) (☆48, updated 2 weeks ago)
- An LLM reads a paper and produces a working prototype (☆48, updated last week)
- ☆14, updated last year
- ☆13, updated last year
- Cerule: A Tiny Mighty Vision Model (☆67, updated 5 months ago)
- A Data Source for Reasoning Embodied Agents (☆19, updated last year)
- Train, tune, and infer the Bamba model (☆83, updated 3 weeks ago)
- Implementation of "PaLM2-VAdapter" from the multi-modal model paper "PaLM2-VAdapter: Progressively Aligned Language Model Makes a Stron… (☆17, updated 3 months ago)
- Code and pretrained models for the paper "MatMamba: A Matryoshka State Space Model" (☆57, updated 2 months ago)