sumo43 / loopvlm
Run PaliGemma in real time
☆133 · Updated last year
Alternatives and similar repositories for loopvlm
Users interested in loopvlm are comparing it to the repositories listed below.
- Cerule - A Tiny Mighty Vision Model ☆68 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆232 · Updated 11 months ago
- ☆112 · Updated last year
- Full fine-tuning of large language models without large memory requirements ☆94 · Updated 2 weeks ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆146 · Updated 7 months ago
- ☆94 · Updated 2 years ago
- look how they massacred my boy ☆63 · Updated 11 months ago
- SmolLM with an Entropix sampler in PyTorch ☆150 · Updated 11 months ago
- An implementation of Self-Extend, expanding the context window via grouped attention ☆118 · Updated last year
- A reinforcement learning framework based on MLX ☆240 · Updated 3 weeks ago
- An MLX project to train a base model on your WhatsApp chats using (Q)LoRA fine-tuning ☆170 · Updated last year
- Embed arbitrary modalities (images, audio, documents, etc.) into large language models ☆186 · Updated last year
- LLaVA server (llama.cpp) ☆182 · Updated last year
- Swarming algorithms like PSO, Ant Colony, Sakana, and more in PyTorch 😊 ☆131 · Updated 3 weeks ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- Maybe the new state-of-the-art vision model? We'll see 🤷‍♂️ ☆167 · Updated last year
- Inference code for Mixtral-8x7B-32kseqlen ☆101 · Updated last year
- A really tiny autograd engine ☆95 · Updated 4 months ago
- ☆116 · Updated 9 months ago
- Testing and evaluating the capabilities of vision-language models (PaliGemma) in performing computer vision tasks such as object detectio… ☆84 · Updated last year
- GRDN.AI app for garden optimization ☆70 · Updated last year
- Run GGML models with Kubernetes ☆174 · Updated last year
- ☆89 · Updated last year
- Efficient vector database for hundreds of millions of embeddings ☆208 · Updated last year
- ☆46 · Updated last year
- Code to train and evaluate Neural Attention Memory Models, universally applicable memory systems for transformers ☆322 · Updated 11 months ago
- ☆123 · Updated last year
- A small codebase for training large models ☆308 · Updated 5 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆105 · Updated 6 months ago
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI ☆222 · Updated last year