NousResearch / finetuning-subnet
☆113Updated 6 months ago
Related projects
Alternatives and complementary repositories for finetuning-subnet
- ☆62Updated this week
- ☆104Updated 8 months ago
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes.☆81Updated last year
- ☆118Updated 3 months ago
- ☆48Updated last year
- look how they massacred my boy☆58Updated last month
- Just a bunch of benchmark logs for different LLMs☆115Updated 3 months ago
- KMD is a collection of conversational exchanges between patients and doctors on various medical topics. It aims to capture the intricaci…☆23Updated last year
- ☆64Updated 5 months ago
- Easy-to-use agent memory, powered by chromadb and postgres☆75Updated last year
- Simple examples using Argilla tools to build AI☆40Updated this week
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto…☆203Updated 6 months ago
- an implementation of Self-Extend to expand the context window via grouped attention☆118Updated 10 months ago
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user…☆152Updated this week
- Turing machines, Rule 110, and A::B reversal using Claude 3 Opus.☆60Updated 6 months ago
- A Python library to orchestrate LLMs in a neural network-inspired structure☆41Updated last month
- ☆93Updated last month
- RAFT, or Retrieval-Augmented Fine-Tuning, is a method comprising a fine-tuning phase and a RAG-based retrieval phase. It is particularly sui…☆75Updated 2 months ago
- The next evolution of Agents☆46Updated last week
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens☆113Updated 3 weeks ago
- ☆28Updated 8 months ago
- Modified Stanford-Alpaca Trainer for Training Replit's Code Model☆40Updated last year
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI☆221Updated 6 months ago
- Set of scripts to finetune LLMs☆36Updated 7 months ago
- A new benchmark for measuring LLMs' capability to detect bugs in large codebases.☆27Updated 5 months ago
- Full finetuning of large language models without large memory requirements☆93Updated 10 months ago
- Low-rank adapter extraction for fine-tuned transformer models☆162Updated 6 months ago
- An automated tool for discovering insights from research paper corpora☆135Updated 5 months ago
- inference code for mixtral-8x7b-32kseqlen☆98Updated 11 months ago
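One technique named in the list above, activation engineering with steering vectors, has a simple core idea: extract a vector as the difference between hidden activations on contrasting prompts, then add a scaled copy of it to the model's hidden states during generation. The sketch below is a toy, framework-free illustration of that arithmetic only; the array shapes, function names, and `alpha` scale are illustrative assumptions, not the API of any repository listed here.

```python
import numpy as np

# Toy hidden states with shape (num_tokens, hidden_dim).
rng = np.random.default_rng(0)
acts_pos = rng.normal(size=(8, 16))  # activations from "on-topic" prompts
acts_neg = rng.normal(size=(8, 16))  # activations from contrasting prompts

def steering_vector(pos, neg):
    """Contrastive steering vector: difference of mean activations."""
    return pos.mean(axis=0) - neg.mean(axis=0)

def apply_steering(hidden, vec, alpha=4.0):
    """Add the scaled steering vector to every token's hidden state."""
    return hidden + alpha * vec

vec = steering_vector(acts_pos, acts_neg)
hidden = rng.normal(size=(4, 16))  # one layer's residual-stream activations
steered = apply_steering(hidden, vec)
```

In practice the addition happens inside a forward hook on a chosen transformer layer, and `alpha` trades off steering strength against output coherence.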