NousResearch / finetuning-subnet
☆113 · Updated 7 months ago
Alternatives and similar repositories for finetuning-subnet:
Users interested in finetuning-subnet are comparing it to the repositories listed below.
- ☆62 · Updated 3 weeks ago
- ☆108 · Updated last month
- Just a bunch of benchmark logs for different LLMs · ☆116 · Updated 5 months ago
- A framework for orchestrating AI agents using a mermaid graph · ☆75 · Updated 8 months ago
- Modular Agentic AI Architecture - NousResearch x Teleport (Flashbots) · ☆49 · Updated last week
- ☆48 · Updated last year
- inference code for mixtral-8x7b-32kseqlen · ☆99 · Updated last year
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI · ☆222 · Updated 8 months ago
- ☆60 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models · ☆165 · Updated 8 months ago
- they've simulated websites, worlds, and imaginary CLIs... but what if they simulated *you*? · ☆105 · Updated this week
- MLX port for xjdr's entropix sampler (mimics jax implementation) · ☆60 · Updated 2 months ago
- Comprehensive analysis of the differences in performance of QLoRA, LoRA, and full fine-tunes · ☆82 · Updated last year
- an implementation of Self-Extend, to expand the context window via grouped attention · ☆118 · Updated last year
- look how they massacred my boy · ☆63 · Updated 3 months ago
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… · ☆162 · Updated 2 weeks ago
- ☆65 · Updated 7 months ago
- An automated tool for discovering insights from research paper corpora · ☆135 · Updated 7 months ago
- Scripts to create your own MoE models using MLX · ☆85 · Updated 10 months ago
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… · ☆214 · Updated 8 months ago
- A distributed agent orchestration framework for market agents · ☆74 · Updated this week
- ☆28 · Updated 10 months ago
- SN1: An incentive mechanism for internet-scale conversational intelligence · ☆88 · Updated this week
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens · ☆118 · Updated 2 months ago
- Full finetuning of large language models without large memory requirements · ☆93 · Updated last year
- smolLM with Entropix sampler on pytorch · ☆147 · Updated 2 months ago
- The next evolution of Agents · ☆46 · Updated this week
- Incentivized Training over Wide Web with 1000x model compression · ☆22 · Updated 2 months ago
- A comprehensive repository of reasoning tasks for LLMs (and beyond) · ☆298 · Updated 3 months ago