mallik3006 / LLM_fine_tuning_llama3_8b
Fine-Tuning Llama3-8B LLM in a multi-GPU environment using DeepSpeed
★19 · Updated last year
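For context on the repository's topic: multi-GPU fine-tuning with DeepSpeed typically revolves around a JSON config passed to the `deepspeed` launcher. Below is a minimal sketch of such a config; the ZeRO stage, batch sizes, and GPU count are illustrative assumptions, not values taken from this repository.

```python
import json

# Illustrative DeepSpeed config sketch (assumed values, not from this repo).
# ZeRO stage 2 partitions optimizer state and gradients across GPUs.
ds_config = {
    "train_micro_batch_size_per_gpu": 2,
    "gradient_accumulation_steps": 8,
    "bf16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
}

# Effective global batch = micro batch * accumulation steps * world size.
world_size = 4  # hypothetical number of GPUs
global_batch = (
    ds_config["train_micro_batch_size_per_gpu"]
    * ds_config["gradient_accumulation_steps"]
    * world_size
)

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```

Such a config would then be handed to a training script (the script name here is hypothetical), e.g. `deepspeed --num_gpus 4 train.py --deepspeed ds_config.json`.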
Alternatives and similar repositories for LLM_fine_tuning_llama3_8b
Users interested in LLM_fine_tuning_llama3_8b are comparing it to the libraries listed below.
Sorting:
- LoRA and DoRA from Scratch Implementations · ★215 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 Transformers and open-source datasets. · ★77 · Updated last year
- A collection of LogitsProcessors to customize and enhance LLM behavior for specific tasks. · ★382 · Updated 6 months ago
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. · ★32 · Updated 4 months ago
- A set of scripts and notebooks on LLM finetuning and dataset creation · ★115 · Updated last year
- Let's build better datasets, together! · ★269 · Updated last year
- LLaMA 3 is one of the most promising open-source models after Mistral; we will recreate its architecture in a simpler manner. · ★197 · Updated last year
- Sakura-SOLAR-DPO: Merge, SFT, and DPO · ★116 · Updated 2 years ago
- Set of scripts to finetune LLMs · ★37 · Updated last year
- minimal GRPO implementation from scratch · ★102 · Updated 10 months ago
- Research projects built on top of Transformers · ★110 · Updated 10 months ago
- Code for NeurIPS LLM Efficiency Challenge · ★60 · Updated last year
- ★140 · Updated 5 months ago
- This project is a collection of fine-tuning scripts to help researchers fine-tune Qwen 2 VL on HuggingFace datasets. · ★77 · Updated 6 months ago
- Code for KaLM-Embedding models · ★112 · Updated 7 months ago
- [ACL'25] Official Code for LlamaDuo: LLMOps Pipeline for Seamless Migration from Service LLMs to Small-Scale Local LLMs · ★314 · Updated 6 months ago
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. · ★138 · Updated last year
- ★52 · Updated last year
- Efficient Finetuning for OpenAI GPT-OSS · ★23 · Updated 3 months ago
- ★48 · Updated last year
- An extension of the nanoGPT repository for training small MoE models. · ★233 · Updated 10 months ago
- Complete implementation of Llama2 with/without KV cache & inference · ★49 · Updated last year
- A simple implementation of Llama 1 and 2. Llama architecture built from scratch using PyTorch; all the models are built from scratch that inc… · ★13 · Updated last year
- Notebook and Scripts that showcase running quantized diffusion models on consumer GPUs · ★38 · Updated last year
- Implementation of the LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens Paper · ★150 · Updated last year
- ★125 · Updated last year
- Fine tune Gemma 3 on an object detection task · ★96 · Updated 6 months ago
- Pretraining and finetuning for visual instruction following with Mixture of Experts · ★16 · Updated 2 years ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… · ★249 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free · ★232 · Updated last year