chrisociepa / allamo
Simple, hackable and fast implementation for training/finetuning medium-sized LLaMA-based models
☆163 · Updated last week
Alternatives and similar repositories for allamo:
Users interested in allamo are comparing it to the libraries listed below.
- ☆536 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformer models ☆169 · Updated 9 months ago
- Tune MPTs ☆84 · Updated last year
- Some simple scripts that I use day-to-day when working with LLMs and the Hugging Face Hub ☆157 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated last year
- Tune any FALCON in 4-bit ☆466 · Updated last year
- Just a simple HowTo for https://github.com/johnsmith0031/alpaca_lora_4bit ☆31 · Updated last year
- Falcon LLM ggml framework with CPU and GPU support ☆246 · Updated last year
- Extend the original llama.cpp repo to support the RedPajama model. ☆117 · Updated 5 months ago
- This is our own implementation of 'Layer Selective Rank Reduction' ☆233 · Updated 8 months ago
- 4-bit quantization of LLaMA using GPTQ ☆131 · Updated last year
- ☆456 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆147 · Updated last year
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆150 · Updated last year
- ggml implementation of BERT ☆481 · Updated 11 months ago
- Merge Transformers language models using gradient parameters. ☆205 · Updated 6 months ago
- Full finetuning of large language models without large memory requirements ☆93 · Updated last year
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆111 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B for free ☆230 · Updated 3 months ago
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆566 · Updated 7 months ago
- Reimplementation of the task generation part from the Alpaca paper ☆119 · Updated last year
- An implementation of Self-Extend to expand the context window via grouped attention ☆118 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆421 · Updated last year
- Notus is a collection of LLMs fine-tuned with SFT, DPO, SFT+DPO, and/or other RLHF techniques, while always keeping a data-first app… ☆164 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆77 · Updated 10 months ago
- GPT-2 small trained on phi-like data ☆65 · Updated last year
- ☆412 · Updated last year
- 4-bit quantization of SantaCoder using GPTQ ☆51 · Updated last year
- BigTranslate: Augmenting Large Language Models with Multilingual Translation Capability over 100 Languages ☆221 · Updated last year
- Framework-agnostic Python runtime for RWKV models ☆145 · Updated last year