SkunkworksAI / BakLLaVA
☆708 · Updated last year
Alternatives and similar repositories for BakLLaVA:
Users interested in BakLLaVA are comparing it to the repositories listed below.
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆730 · Updated last year
- LLaVA-Interactive-Demo ☆366 · Updated 7 months ago
- ☆832 · Updated 6 months ago
- Inference code for Persimmon-8B ☆415 · Updated last year
- Fine-tune mistral-7B on 3090s, a100s, h100s ☆709 · Updated last year
- Customizable implementation of the self-instruct paper. ☆1,040 · Updated last year
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆574 · Updated last year
- Code for fine-tuning Platypus fam LLMs using LoRA ☆628 · Updated last year
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆820 · Updated last year
- ICLR2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,371 · Updated last week
- Embed arbitrary modalities (images, audio, documents, etc.) into large language models. ☆182 · Updated 11 months ago
- A bagel, with everything. ☆317 · Updated 11 months ago
- Salesforce open-source LLMs with 8k sequence length. ☆717 · Updated last month
- ☆412 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,450 · Updated 11 months ago
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling" ☆855 · Updated last month
- Tune any FALCON in 4-bit ☆466 · Updated last year
- llama.cpp with the BakLLaVA model describing what it sees ☆384 · Updated last year
- Maybe the new state-of-the-art vision model? We'll see 🤷‍♂️ ☆162 · Updated last year
- ☆694 · Updated this week
- ☆220 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆690 · Updated 11 months ago
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,372 · Updated 11 months ago
- ☆449 · Updated last year
- [ACL2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆922 · Updated 5 months ago
- Mixture-of-Experts for Large Vision-Language Models ☆2,125 · Updated 3 months ago
- ☆445 · Updated 11 months ago
- A novel implementation of fusing ViT with Mamba into a fast, agile, and high performance Multi-Modal Model. Powered by Zeta, the simplest… ☆447 · Updated this week
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆646 · Updated 9 months ago
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆224 · Updated 10 months ago