mirzayasirabdullahbaig07 / Fine-Tuning-LLaMA-3.2-3B-Using-PEFT-LoRA
This project showcases parameter-efficient fine-tuning of the LLaMA 3.2 (3B) language model using PEFT (Parameter-Efficient Fine-Tuning) and LoRA (Low-Rank Adaptation). It is optimized for minimal resource usage and trained on a domain-specific dataset to enhance performance in specialized tasks.
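To illustrate the core idea behind LoRA that the repository builds on: rather than updating a full weight matrix, LoRA trains two small low-rank factors and adds their product to the frozen weight. The sketch below is a minimal, hypothetical illustration (not the repository's actual code); the 3072 dimension is assumed to match LLaMA 3.2 3B's hidden size.

```python
import numpy as np

# Instead of updating a full weight W (d_out x d_in), LoRA learns low-rank
# factors B (d_out x r) and A (r x d_in) and applies:
#   W_adapted = W + (alpha / r) * B @ A
d_out, d_in, r, alpha = 3072, 3072, 8, 16  # assumed LLaMA 3.2 3B hidden size

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))                   # trainable, zero-init so the delta starts at 0

W_adapted = W + (alpha / r) * (B @ A)      # equals W exactly at initialization

full_params = d_out * d_in
lora_params = r * (d_in + d_out)
print(f"trainable fraction: {lora_params / full_params:.4f}")
```

Because B is zero-initialized, the adapted model starts identical to the base model, and only the factors A and B (well under 1% of the full matrix's parameters here) need gradients, which is what makes the fine-tuning resource-efficient.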
18 stars · Updated Jun 1, 2025

Alternatives and similar repositories for Fine-Tuning-LLaMA-3.2-3B-Using-PEFT-LoRA

Users interested in Fine-Tuning-LLaMA-3.2-3B-Using-PEFT-LoRA are comparing it to the libraries listed below.
