NousResearch / Obsidian
Maybe the new state-of-the-art vision model? We'll see 🤷‍♂️
★154 · Updated 10 months ago
Related projects
Alternatives and complementary repositories for Obsidian
- Cerule - A Tiny Mighty Vision Model ★67 · Updated 2 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ★232 · Updated 5 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs ★77 · Updated 7 months ago
- Full finetuning of large language models without large memory requirements ★93 · Updated 10 months ago
- Embed arbitrary modalities (images, audio, documents, etc.) into large language models. ★176 · Updated 7 months ago
- Low-rank adapter extraction for fine-tuned transformer models ★162 · Updated 6 months ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ★221 · Updated 3 weeks ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes. ★81 · Updated last year
- Steer LLM outputs towards a certain topic or subject and enhance response capabilities using activation engineering by adding steering vectors (a minimal sketch follows this list) ★203 · Updated 6 months ago
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ★221 · Updated 6 months ago
- An implementation of Self-Extend, which expands the context window via grouped attention ★118 · Updated 10 months ago
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ★113 · Updated 3 weeks ago
- EvolKit is a framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models ★180 · Updated 3 weeks ago
- Video+code lecture on building nanoGPT from scratch ★64 · Updated 5 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ★124 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and Huggingface Hub ★155 · Updated last year
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first approach ★161 · Updated 10 months ago
- Just a bunch of benchmark logs for different LLMs ★115 · Updated 3 months ago
- Run PaliGemma in real time ★123 · Updated 6 months ago
- Modified Stanford-Alpaca Trainer for Training Replit's Code Model ★40 · Updated last year
- A pipeline parallel training script for LLMs. ★83 · Updated this week
- Tune MPTs ★84 · Updated last year
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ★152 · Updated this week
- Set of scripts to finetune LLMs ★36 · Updated 7 months ago
- The Next Generation Multi-Modality Superintelligence ★70 · Updated 2 months ago
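
The steering-vector entry above describes shifting a model's hidden activations to bias what it generates. Below is a minimal, hypothetical sketch of that general idea using GPT-2 and a Hugging Face forward hook; the layer index, scale, and contrast prompts are arbitrary illustrations, not code from that repository.

```python
# Minimal activation-steering sketch (assumptions: GPT-2 via transformers;
# LAYER, SCALE, and the contrast prompts are arbitrary, for illustration only).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

LAYER = 6    # hypothetical: which transformer block to steer
SCALE = 4.0  # hypothetical: steering strength

def mean_hidden(text):
    # Mean hidden state at the output of block LAYER for a prompt.
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        hs = model(ids, output_hidden_states=True).hidden_states
    return hs[LAYER + 1].mean(dim=1)  # hs[0] is the embedding output

# Steering vector: difference of mean activations on contrasting prompts.
steer = mean_hidden("I love talking about weddings.") \
      - mean_hidden("I hate talking about weddings.")

def add_steering(module, inputs, output):
    # Forward hook: shift the block's output along the steering vector.
    return (output[0] + SCALE * steer,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(add_steering)
ids = tok("My day so far:", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=30, do_sample=False)
handle.remove()  # detach the hook so later generations are unsteered
print(tok.decode(out[0], skip_special_tokens=True))
```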