JD-P / minihf
MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user develop their prompts into full models.
☆171 · Updated this week
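The workflow MiniHF describes — sample completions from a local model, collect a human preference between them, and feed the pairs back into fine-tuning — can be sketched in a few lines. The following is a minimal, hypothetical illustration using the Hugging Face `transformers` API, not MiniHF's actual interface; the model name and the JSONL record format are placeholder assumptions:

```python
# Hypothetical sketch of a human-preference collection loop; not MiniHF's API.
import json
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # placeholder: substitute any local causal LM
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

def sample(prompt: str) -> str:
    """Draw one sampled completion for the prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.9,
        max_new_tokens=64,
        pad_token_id=tokenizer.eos_token_id,
    )
    new_tokens = out[0][inputs["input_ids"].shape[1]:]  # strip the prompt tokens
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

prompt = input("Prompt: ")
a, b = sample(prompt), sample(prompt)
print(f"[A] {a}\n[B] {b}")
choice = input("Prefer A or B? ").strip().upper()
chosen, rejected = (a, b) if choice == "A" else (b, a)

# Pairs in (prompt, chosen, rejected) form are the raw material for
# preference-based fine-tuning (e.g. DPO-style training).
with open("preferences.jsonl", "a") as f:
    f.write(json.dumps({"prompt": prompt, "chosen": chosen, "rejected": rejected}) + "\n")
```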
Alternatives and similar repositories for minihf
Users interested in minihf are comparing it to the libraries listed below.
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes ☆81 · Updated last year
- Full fine-tuning of large language models without large memory requirements ☆93 · Updated last year
- ☆92 · Updated last year
- smolLM with Entropix sampler on PyTorch ☆150 · Updated 7 months ago
- Low-rank adapter extraction for fine-tuned transformers models ☆171 · Updated last year
- This is our own implementation of 'Layer Selective Rank Reduction' ☆238 · Updated last year
- A Collection of Pydantic Models to Abstract IRL ☆18 · Updated last week
- Official repo for Learning to Reason for Long-Form Story Generation ☆58 · Updated last month
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆139 · Updated 3 months ago
- ☆111 · Updated 5 months ago
- smol models are fun too ☆92 · Updated 6 months ago
- Generate Synthetic Data Using OpenAI, MistralAI, or AnthropicAI ☆221 · Updated last year
- ☆48 · Updated last year
- Plotting (entropy, varentropy) for small LMs ☆96 · Updated last week
- ☆49 · Updated last year
- Simple Transformer in JAX ☆137 · Updated 11 months ago
- An implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- look how they massacred my boy ☆63 · Updated 7 months ago
- ☆83 · Updated 4 months ago
- QLoRA: Efficient Finetuning of Quantized LLMs (a typical setup is sketched after this list) ☆77 · Updated last year
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding ☆171 · Updated 4 months ago
- Inference code for mixtral-8x7b-32kseqlen ☆99 · Updated last year
- A puzzle to learn about prompting ☆127 · Updated 2 years ago
- History files recorded from human interaction while solving ARC tasks ☆110 · Updated last week
- An unsupervised model merging algorithm for Transformers-based language models ☆104 · Updated last year
- Preprint: Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆28 · Updated last year
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vectors ☆238 · Updated 3 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆122 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆118 · Updated 10 months ago
- Code repository for the c-BTM paper ☆106 · Updated last year
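Two of the entries above concern QLoRA, the technique of fine-tuning a 4-bit-quantized base model through small low-rank adapters while the base weights stay frozen. As a rough illustration of how that is typically wired up with the Hugging Face `transformers`, `bitsandbytes`, and `peft` libraries — a generic sketch under assumed defaults, not code from the listed repositories; the model name and hyperparameters are placeholders:

```python
# Hypothetical minimal QLoRA setup; exact settings vary across the listed repos.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

MODEL = "meta-llama/Llama-2-7b-hf"  # placeholder base model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                        # quantize frozen base weights to 4-bit
    bnb_4bit_quant_type="nf4",                # NF4 data type from the QLoRA paper
    bnb_4bit_compute_dtype=torch.bfloat16,    # dequantized compute precision
)
model = AutoModelForCausalLM.from_pretrained(MODEL, quantization_config=bnb)

lora = LoraConfig(
    r=16,                                     # adapter rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],      # which projections get adapters varies
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)           # only the small LoRA matrices train
model.print_trainable_parameters()            # typically well under 1% of the model
```

The wrapped model can then be passed to an ordinary training loop or `Trainer`; memory savings come from keeping the quantized base frozen and backpropagating only through the adapters.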