rmihaylov / falcontune
Tune any FALCON in 4-bit
☆466 · Updated last year
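For orientation, here is a minimal sketch of what 4-bit LoRA fine-tuning of a Falcon model looks like with the Hugging Face transformers / bitsandbytes / peft stack. This illustrates the general technique falcontune implements, not falcontune's own interface; the model name and hyperparameters below are placeholders.

```python
# Sketch of 4-bit (NF4) quantized loading plus LoRA adapters -- the general
# recipe behind 4-bit fine-tuning; not falcontune's actual API.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,                       # quantize base weights to 4 bits
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",                      # illustrative model id
    quantization_config=bnb_cfg,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_cfg = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["query_key_value"],      # Falcon's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()           # only the LoRA adapters train
```

The frozen base model stays in 4-bit precision while the small LoRA adapters train in higher precision, which is what makes single-GPU fine-tuning of a 7B+ model feasible.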
Alternatives and similar repositories for falcontune:
Users interested in falcontune are comparing it to the libraries listed below:
- Customizable implementation of the self-instruct paper. ☆1,035 · Updated 10 months ago
- ☆536 · Updated last year
- ☆456 · Updated last year
- Fine-tune Mistral-7B on 3090s, A100s, and H100s ☆704 · Updated last year
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆714 · Updated 7 months ago
- Code for fine-tuning Platypus-family LLMs using LoRA ☆625 · Updated 11 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆419 · Updated last year
- Repo for fine-tuning Causal LLMs ☆453 · Updated 9 months ago
- A bagel, with everything. ☆315 · Updated 9 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆684 · Updated 9 months ago
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆815 · Updated last year
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆562 · Updated 6 months ago
- OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA ☆301 · Updated last year
- Batched LoRAs ☆336 · Updated last year
- [ACL 2023] We introduce LLM-Blender, an innovative ensembling framework to attain consistently superior performance by leveraging the dive… ☆902 · Updated 2 months ago
- This repository contains code for extending the Stanford Alpaca synthetic instruction tuning to existing instruction-tuned models such as… ☆350 · Updated last year
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" (see the sketch after this list) ☆439 · Updated 8 months ago
- ☆413 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated last year
- Official repository for LongChat and LongEval ☆518 · Updated 7 months ago
- fastLLaMa: An experimental high-performance framework for running Decoder-only LLMs with 4-bit quantization in Python using a C/C++ backe… ☆409 · Updated last year
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,398 · Updated 9 months ago
- ☆537 · Updated last month
- ☆267 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers (QLoRA) ☆123 · Updated last year
- Alpaca dataset from Stanford, cleaned and curated ☆1,531 · Updated last year
- ☆440 · Updated last year
- Generate textbook-quality synthetic LLM pretraining data ☆492 · Updated last year
- Merge Transformers language models using gradient parameters. ☆202 · Updated 5 months ago
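The ReLoRA entry above names a concrete technique: train a low-rank (LoRA) update, merge it into the base weights, reinitialize the adapter, and repeat, so the accumulated update can reach a higher rank than any single adapter. Below is a rough sketch of that loop, assuming the peft library; it omits ReLoRA's optimizer-state pruning and learning-rate restarts, and the model name and hyperparameters are illustrative only.

```python
# Sketch of the ReLoRA merge-and-restart loop, not the official implementation.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base model to be trained via repeated low-rank updates (illustrative name).
base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b", torch_dtype=torch.bfloat16)

for restart in range(3):  # number of merge-and-reinit cycles (illustrative)
    # Fresh rank-r adapter for this cycle.
    cfg = LoraConfig(r=8, lora_alpha=16,
                     target_modules=["query_key_value"], task_type="CAUSAL_LM")
    model = get_peft_model(base, cfg)
    optimizer = torch.optim.AdamW(
        (p for p in model.parameters() if p.requires_grad), lr=2e-4
    )

    # ... run a fixed number of training steps on `model` here ...

    # Fold the trained rank-r update into the base weights and discard the
    # adapter; the next cycle trains a new rank-r update, so the total change
    # to the base weights is no longer limited to rank r.
    base = model.merge_and_unload()
```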