NousResearch / finetuning-subnet
☆122 · Updated last year
Alternatives and similar repositories for finetuning-subnet
Users interested in finetuning-subnet are comparing it to the libraries listed below
- ☆64 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- ☆118 · Updated last year
- ☆45 · Updated 2 years ago
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆222 · Updated last year
- look how they massacred my boy ☆63 · Updated last year
- ☆68 · Updated last year
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆453 · Updated last year
- ☆136 · Updated 9 months ago
- Plotting (entropy, varentropy) for small LMs ☆99 · Updated 7 months ago
- A tree-based prefix cache library that allows rapid creation of looms: hierarchical branching pathways of LLM generations ☆77 · Updated 10 months ago
- MLX port for xjdr's entropix sampler (mimics jax implementation) ☆62 · Updated last year
- an implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- Fast parallel LLM inference for MLX ☆238 · Updated last year
- ☆28 · Updated last year
- Making the world's first and smartest opensource any-to-any AGI system ☆44 · Updated last month
- ☆135 · Updated 2 years ago
- ☆137 · Updated last year
- ☆164 · Updated 4 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆240 · Updated last year
- A framework for orchestrating AI agents using a mermaid graph ☆77 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆150 · Updated 10 months ago
- Turing machines, Rule 110, and A::B reversal using Claude 3 Opus ☆58 · Updated last year
- ☆125 · Updated last year
- A strongly typed Python DSL for developing message passing multi agent systems ☆53 · Updated last year
- an open source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆108 · Updated 9 months ago
- An automated tool for discovering insights from research paper corpora ☆137 · Updated last year
- inference code for mixtral-8x7b-32kseqlen ☆104 · Updated 2 years ago
- Full finetuning of large language models without large memory requirements ☆94 · Updated 3 months ago
- Low-Rank adapter extraction for fine-tuned transformers models ☆180 · Updated last year