Abhi0323 / Fine-Tuning-LLaMA-2-with-QLORA-and-PEFT

This project fine-tunes the LLaMA-2 model using Quantized Low-Rank Adaptation (QLoRA) and other parameter-efficient fine-tuning (PEFT) techniques to optimize its performance for specific NLP tasks. The fine-tuned model is demonstrated through a Streamlit application, showcasing its capabilities in a real-time interactive setting.
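For context, here is a minimal sketch of what a QLoRA + PEFT setup for LLaMA-2 typically looks like using the Hugging Face transformers, peft, and bitsandbytes libraries. The checkpoint name, target modules, and hyperparameters below are illustrative assumptions, not values taken from this repository.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model with 4-bit NF4 quantization (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

base_model = "NousResearch/Llama-2-7b-chat-hf"  # hypothetical checkpoint choice
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters (LoRA) so only a small set of parameters is trained
# while the quantized base weights stay frozen.
lora_config = LoraConfig(
    r=16,                      # adapter rank (illustrative hyperparameter)
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights
```

From here, training would usually proceed with a standard Hugging Face Trainer or trl's SFTTrainer, and the resulting adapter weights would be loaded alongside the base model at inference time, for example inside the Streamlit application the description mentions.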
13 · Apr 18, 2024 · Updated last year

Alternatives and similar repositories for Fine-Tuning-LLaMA-2-with-QLORA-and-PEFT

Users who are interested in Fine-Tuning-LLaMA-2-with-QLORA-and-PEFT are comparing it to the libraries listed below.
