qnguyen3 / hermes-llava
☆53 · Updated last year
Alternatives and similar repositories for hermes-llava
Users interested in hermes-llava are comparing it to the repositories listed below.
- The Next Generation Multi-Modality Superintelligence ☆69 · Updated last year
- Finetune any model on HF in less than 30 seconds ☆55 · Updated last month
- ☆26 · Updated 2 years ago
- ☆31 · Updated last year
- Unofficial implementation and experiments related to Set-of-Mark (SoM) 👁️ ☆87 · Updated 2 years ago
- An all-new Language Model That Processes Ultra-Long Sequences of 100,000+ Ultra-Fast ☆150 · Updated last year
- Maybe the new state of the art vision model? we'll see 🤷‍♂️ ☆165 · Updated last year
- GRDN.AI app for garden optimization ☆70 · Updated last year
- ☆73 · Updated 2 years ago
- ☆63 · Updated last year
- ☆35 · Updated 2 years ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆109 · Updated 11 months ago
- Cerule - A Tiny Mighty Vision Model ☆67 · Updated last week
- ☆50 · Updated 2 years ago
- ☆17 · Updated last year
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆103 · Updated 6 months ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆69 · Updated 2 years ago
- A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ ☆63 · Updated 2 years ago
- Command-line script for inferencing from models such as WizardCoder ☆26 · Updated 2 years ago
- Small Multimodal Vision Model "Imp-v1-3b" trained using Phi-2 and Siglip ☆17 · Updated last year
- ☆50 · Updated 3 years ago
- Inference code for mixtral-8x7b-32kseqlen ☆102 · Updated last year
- ☆116 · Updated 11 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆79 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- Alpaca Lora ☆25 · Updated 2 years ago
- ☆29 · Updated last year
- GPT-2 small trained on phi-like data ☆67 · Updated last year
- ☆20 · Updated last year