AblateIt / finetune-study
Comprehensive analysis of the performance differences between QLoRA, LoRA, and full finetunes.
☆83 · Updated 2 years ago
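As context for the comparison, here is a minimal sketch of the three setups the study's name refers to (full finetune, LoRA, QLoRA), using the Hugging Face transformers, peft, and bitsandbytes libraries; the model name and hyperparameters are illustrative assumptions, not the study's actual configuration.

```python
# Minimal sketch of the three setups being compared; model name and
# hyperparameters are illustrative, not taken from the study itself.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

MODEL = "meta-llama/Llama-2-7b-hf"  # hypothetical example model

# 1) Full finetune: load in full precision, all parameters trainable.
full_model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)

# Shared LoRA config: train small rank-16 adapters on the attention projections.
lora_cfg = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)

# 2) LoRA: full-precision base weights stay frozen, only the adapters train.
lora_model = get_peft_model(
    AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16),
    lora_cfg,
)

# 3) QLoRA: base weights quantized to 4-bit NF4, adapters train on top.
bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
qlora_base = AutoModelForCausalLM.from_pretrained(MODEL, quantization_config=bnb_cfg)
qlora_model = get_peft_model(prepare_model_for_kbit_training(qlora_base), lora_cfg)
```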
Alternatives and similar repositories for finetune-study
Users interested in finetune-study are comparing it to the libraries listed below.
- ☆94 · Updated 2 years ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated 4 months ago
- ☆22 · Updated 2 years ago
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆232 · Updated last year
- ☆45 · Updated 2 years ago
- ☆95 · Updated 2 years ago
- MiniHF is an inference, human preference data collection, and fine-tuning tool for local language models. It is intended to help the user… ☆183 · Updated 2 months ago
- Multi-Domain Expert Learning ☆67 · Updated 2 years ago
- ☆86 · Updated 2 years ago
- QLoRA with Enhanced Multi-GPU Support ☆37 · Updated 2 years ago
- QLoRA: Efficient Finetuning of Quantized LLMs ☆79 · Updated last year
- An implementation of Self-Extend, expanding the context window via grouped attention ☆119 · Updated 2 years ago
- ☆198 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆279 · Updated last year
- An introduction to LLM Sampling ☆79 · Updated last year
- This is our own implementation of 'Layer-Selective Rank Reduction' ☆240 · Updated last year
- Inference code for mixtral-8x7b-32kseqlen ☆105 · Updated 2 years ago
- ☆23 · Updated 2 years ago
- Zeus LLM Trainer is a rewrite of Stanford Alpaca aiming to be the trainer for all Large Language Models ☆70 · Updated 2 years ago
- A pipeline for using API calls to agnostically convert unstructured data into structured training data ☆32 · Updated last year
- Doing simple retrieval from LLMs at various context lengths to measure accuracy ☆107 · Updated 4 months ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆226 · Updated 4 months ago
- Low-Rank adapter extraction for fine-tuned transformers models (see the SVD sketch after this list) ☆180 · Updated last year
- ☆416 · Updated 2 years ago
- ☆137 · Updated last year
- A bagel, with everything. ☆326 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆204 · Updated last year
- Fast & more realistic evaluation of chat language models. Includes leaderboard. ☆190 · Updated 2 years ago
- Clean up your LLM datasets ☆114 · Updated 2 years ago
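As referenced from the low-rank adapter extraction entry above, here is a minimal sketch of the core idea: recovering LoRA-style factors from a weight difference via truncated SVD. The tensors and function names are hypothetical illustrations, not that repository's actual API.

```python
# Sketch of low-rank adapter extraction: approximate the delta between a
# fine-tuned weight matrix and its base with a rank-r factorization.
import torch

def extract_lora(w_base: torch.Tensor, w_tuned: torch.Tensor, rank: int = 16):
    """Return (A, B) such that w_base + B @ A approximates w_tuned."""
    delta = (w_tuned - w_base).float()           # weight difference to factor
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    root_s = s[:rank].sqrt()                     # split each singular value across factors
    b = u[:, :rank] * root_s                     # (out_features, rank)
    a = root_s[:, None] * vh[:rank]              # (rank, in_features)
    return a, b

# Usage: reconstruction error shrinks as `rank` approaches the delta's true rank.
base = torch.randn(4096, 4096)
tuned = base + torch.randn(4096, 64) @ torch.randn(64, 4096) * 0.01
a, b = extract_lora(base, tuned, rank=64)
print(torch.dist(base + b @ a, tuned))           # near zero at rank 64
```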