NousResearch / Obsidian
Maybe the new state-of-the-art vision model? We'll see 🤷‍♂️
☆ 153 · Updated 9 months ago
Related projects
Alternatives and complementary repositories for Obsidian
- An implementation of Self-Extend, which expands the context window via grouped attention (sketched after this list) ☆ 118 · Updated 10 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆ 219 · Updated last week
- Cerule: a tiny, mighty vision model ☆ 67 · Updated 2 months ago
- Full fine-tuning of large language models without large memory requirements ☆ 93 · Updated 10 months ago
- Low-rank adapter extraction for fine-tuned transformer models (see the second sketch after this list) ☆ 162 · Updated 6 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆ 77 · Updated 6 months ago
- EvolKit is a framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models ☆ 169 · Updated last week
- An independent implementation of 'Layer-Selective Rank Reduction' (LASER) ☆ 231 · Updated 5 months ago
- Vision Document Retrieval (ViDoRe) benchmark: evaluation code for the ColPali paper ☆ 126 · Updated this week
- GRDN.AI, an app for garden optimization ☆ 69 · Updated 9 months ago
- Embed arbitrary modalities (images, audio, documents, etc.) into large language models ☆ 176 · Updated 7 months ago
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI ☆ 222 · Updated 6 months ago
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆ 273 · Updated last month
- Video and code lecture on building nanoGPT from scratch ☆ 64 · Updated 4 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes ☆ 81 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆ 105 · Updated last week
- A bagel, with everything ☆ 312 · Updated 6 months ago
- Fast parallel LLM inference for MLX ☆ 146 · Updated 4 months ago
- Framework-agnostic computer vision inference ☆ 118 · Updated this week
- A language model that processes ultra-long sequences of 100,000+ tokens ultra-fast ☆ 137 · Updated 2 months ago
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ☆ 151 · Updated this week
- Modified Stanford Alpaca trainer for training Replit's code model ☆ 40 · Updated last year
- RAFT, or Retrieval-Augmented Fine-Tuning, is a method comprising a fine-tuning phase and a RAG-based retrieval phase. It is particularly sui… ☆ 74 · Updated 2 months ago
- An automated tool for discovering insights from research paper corpora ☆ 135 · Updated 5 months ago
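For the Self-Extend entry above, here is a minimal sketch of the grouped-attention position remapping the technique is built on, assuming the formulation from the Self-Extend paper; the function name and parameters are illustrative, not taken from the linked repo:

```python
import numpy as np

def self_extend_positions(seq_len: int, group_size: int, neighbor_window: int):
    """Relative-position matrix in the style of Self-Extend (sketch).

    Inside `neighbor_window`, ordinary relative positions are kept;
    beyond it, positions are merged in groups of `group_size` (floor
    division), shifted so the two regimes meet at the window edge.
    The model therefore never sees a relative position much larger
    than those encountered during pretraining.
    """
    q = np.arange(seq_len)[:, None]  # query positions
    k = np.arange(seq_len)[None, :]  # key positions
    normal = q - k                   # standard relative distance
    grouped = (q // group_size - k // group_size
               + (neighbor_window - neighbor_window // group_size))
    # Entries above the causal diagonal are masked in attention anyway.
    return np.where(normal < neighbor_window, normal, grouped)
```

With `seq_len=16`, `group_size=4`, and `neighbor_window=8`, the largest relative position drops from 15 to 9, which is how the effective context stretches without retraining.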
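And for the low-rank adapter extraction entry, a sketch of the common recipe, assuming extraction means taking a truncated SVD of the weight delta between the fine-tuned and base models; the function name is illustrative:

```python
import numpy as np

def extract_lora(w_base: np.ndarray, w_ft: np.ndarray, rank: int):
    """Recover LoRA-style factors A, B with A @ B ~= w_ft - w_base (sketch).

    The truncated SVD gives the best rank-`rank` approximation of the
    fine-tuning delta in the Frobenius norm, computed per weight matrix.
    """
    delta = w_ft - w_base
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    a = u[:, :rank] * s[:rank]   # (out_dim, rank), singular values folded in
    b = vt[:rank, :]             # (rank, in_dim)
    return a, b
```

Applying this to each linear layer yields an adapter that loads like a trained LoRA, at the cost of whatever the discarded singular values carried.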