mirzayasirabdullahbaig07 / Fine-Tuning-LLaMA-3.2-3B-Using-PEFT-LoRA

This project showcases parameter-efficient fine-tuning of the LLaMA 3.2 (3B) language model using PEFT (Parameter-Efficient Fine-Tuning) and LoRA (Low-Rank Adaptation). It is optimized for minimal resource usage and trained on a domain-specific dataset to enhance performance in specialized tasks.
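The core idea behind LoRA is to freeze the pretrained weight matrix W and learn only a low-rank update (alpha/r)·BA, which drastically cuts the number of trainable parameters. A minimal NumPy sketch of this mechanism, with hypothetical layer sizes chosen for illustration (not the actual LLaMA 3.2 dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration; in practice r << d_in, d_out
d_out, d_in, r, alpha = 64, 64, 8, 16

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialised

def lora_forward(x):
    # y = W x + (alpha / r) * B A x  -- only A and B receive gradient updates
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)

# With B zero-initialised, the adapted layer starts identical to the base layer
assert np.allclose(lora_forward(x), W @ x)

full_params = d_out * d_in
lora_params = r * (d_in + d_out)
print(f"full: {full_params}, LoRA: {lora_params}")  # LoRA trains far fewer parameters
```

In the PEFT library this corresponds to wrapping the model with a `LoraConfig` (rank `r`, scaling `lora_alpha`, and a list of target modules such as the attention projections) and calling `get_peft_model`, so only the adapter matrices are updated during fine-tuning.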
18 stars · Jun 1, 2025 · Updated 8 months ago
