mallik3006 / LLM_fine_tuning_llama3_8b
Fine-Tuning Llama3-8B LLM in a multi-GPU environment using DeepSpeed
★ 19 · Updated last year
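The repository's own code is not shown on this page, but as a rough illustration of the topic in its description (multi-GPU supervised fine-tuning of Llama-3-8B with DeepSpeed), a minimal sketch using the Hugging Face Trainer with a DeepSpeed ZeRO-3 config might look like the following. The checkpoint name, the dataset, and the `ds_zero3.json` path are assumptions for illustration, not taken from the repository.

```python
# Minimal sketch (not the repository's actual script): supervised fine-tuning
# of Llama-3-8B across multiple GPUs with Hugging Face Trainer + DeepSpeed ZeRO-3.
# Checkpoint, dataset, and "ds_zero3.json" are assumptions for illustration.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "meta-llama/Meta-Llama-3-8B"  # assumed base checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

model = AutoModelForCausalLM.from_pretrained(model_name)

# Assumed instruction dataset with a pre-formatted "text" column.
dataset = load_dataset("tatsu-lab/alpaca", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# Causal-LM collator: pads each batch and copies input_ids into labels.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="llama3-8b-sft",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=2e-5,
    bf16=True,
    logging_steps=10,
    save_strategy="epoch",
    deepspeed="ds_zero3.json",  # hypothetical ZeRO-3 config that shards params/optimizer states
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

Launched with something like `deepspeed --num_gpus=8 train.py` (or `torchrun`), DeepSpeed reads the JSON config and partitions model and optimizer states across the available GPUs, which is what makes full fine-tuning of an 8B model feasible on a single multi-GPU node.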
Alternatives and similar repositories for LLM_fine_tuning_llama3_8b
Users that are interested in LLM_fine_tuning_llama3_8b are comparing it to the libraries listed below
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ★ 78 · Updated last year
- LoRA and DoRA from Scratch Implementations ★ 214 · Updated last year
- minimal GRPO implementation from scratch ★ 99 · Updated 8 months ago
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ★ 116 · Updated last year
- Research projects built on top of Transformers ★ 100 · Updated 8 months ago
- Let's build better datasets, together! ★ 264 · Updated 11 months ago
- Building LLaMA 4 MoE from Scratch ★ 68 · Updated 7 months ago
- A collection of LogitsProcessors to customize and enhance LLM behavior for specific tasks. ★ 372 · Updated 4 months ago
- ★ 124 · Updated last year
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models… ★ 242 · Updated last year
- ★ 78 · Updated last year
- A set of scripts and notebooks on LLM finetuning and dataset creation ★ 111 · Updated last year
- Implementation of the LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens Paper ★ 152 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ★ 231 · Updated last year
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ★ 33 · Updated 2 months ago
- Efficient Finetuning for OpenAI GPT-OSS ★ 22 · Updated last month
- LLM Workshop by Sourab Mangrulkar ★ 395 · Updated last year
- Set of scripts to finetune LLMs ★ 38 · Updated last year
- [ACL'25] Official Code for LlamaDuo: LLMOps Pipeline for Seamless Migration from Service LLMs to Small-Scale Local LLMs ★ 314 · Updated 4 months ago
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ★ 139 · Updated last year
- This repository contains the code for dataset curation and finetuning of the instruct variant of the Bilingual OpenHathi model. The resulting… ★ 23 · Updated last year
- Pre-training code for Amber 7B LLM ★ 169 · Updated last year
- LLaMA 3 is one of the most promising open-source models after Mistral; we will recreate its architecture in a simpler manner. ★ 190 · Updated last year
- ★ 52 · Updated last year
- ★ 138 · Updated 2 months ago
- Pretraining and finetuning for visual instruction following with Mixture of Experts ★ 16 · Updated last year
- The Universe of Evaluation. All about the evaluation for LLMs. ★ 229 · Updated last year
- LORA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ★ 117 · Updated 2 years ago
- experiments with inference on llama ★ 103 · Updated last year
- This project is a collection of fine-tuning scripts to help researchers fine-tune Qwen 2 VL on HuggingFace datasets. ★ 77 · Updated 4 months ago