mallik3006 / LLM_fine_tuning_llama3_8b
Fine-Tuning Llama3-8B LLM in a multi-GPU environment using DeepSpeed
☆18Updated last year
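For context, a minimal sketch of what multi-GPU fine-tuning with DeepSpeed ZeRO typically looks like via the 🤗 Trainer. The model name, dataset, and hyperparameters below are illustrative assumptions, not taken from this repository.

```python
# Minimal sketch (assumption-based, not this repo's code): fine-tuning
# Llama3-8B with DeepSpeed ZeRO-3 through the Hugging Face Trainer.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL = "meta-llama/Meta-Llama-3-8B"  # assumed base checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL)

# Small instruction-tuning slice, tokenized into causal-LM inputs (illustrative).
dataset = load_dataset("tatsu-lab/alpaca", split="train[:1%]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=dataset.column_names,
)

# DeepSpeed ZeRO-3 config passed directly as a dict; "auto" lets the Trainer
# fill in values from TrainingArguments.
ds_config = {
    "zero_optimization": {"stage": 3},
    "bf16": {"enabled": True},
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

args = TrainingArguments(
    output_dir="llama3-8b-sft",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    bf16=True,
    logging_steps=10,
    deepspeed=ds_config,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

On a multi-GPU node this would typically be launched with the DeepSpeed launcher, e.g. `deepspeed --num_gpus=4 train.py`, where `train.py` is a hypothetical name for the script above.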
Alternatives and similar repositories for LLM_fine_tuning_llama3_8b
Users interested in LLM_fine_tuning_llama3_8b are comparing it to the libraries listed below
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets.☆78Updated last year
- LoRA and DoRA from Scratch Implementations☆215Updated last year
- A collection of LogitsProcessors to customize and enhance LLM behavior for specific tasks.☆375Updated 5 months ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens"☆152Updated last year
- A set of scripts and notebooks on LLM finetuning and dataset creation☆112Updated last year
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models☆244Updated last year
- minimal GRPO implementation from scratch☆100Updated 9 months ago
- Sakura-SOLAR-DPO: Merge, SFT, and DPO☆116Updated last year
- LLaMA 3 is one of the most promising open-source models after Mistral; we recreate its architecture in a simpler manner.☆192Updated last year
- Code for KaLM-Embedding models☆103Updated 5 months ago
- Let's build better datasets, together!☆265Updated 11 months ago
- RL significantly improves the reasoning capability of Qwen2.5-1.5B-Instruct☆31Updated 9 months ago
- ☆78Updated last year
- ☆81Updated last year
- Distributed training (multi-node) of a Transformer model☆89Updated last year
- Code for NeurIPS LLM Efficiency Challenge☆59Updated last year
- ☆103Updated 8 months ago
- Notebook and Scripts that showcase running quantized diffusion models on consumer GPUs☆38Updated last year
- Pretraining and finetuning for visual instruction following with Mixture of Experts☆16Updated last year
- Testing DeepSpeed integration in 🤗 Accelerate☆11Updated 3 years ago
- Research projects built on top of Transformers☆104Updated 9 months ago
- ☆48Updated last year
- Building LLaMA 4 MoE from Scratch☆68Updated 8 months ago
- ☆52Updated last year
- Fine-tune ModernBERT on a large Dataset with Custom Tokenizer Training☆74Updated last month
- This project is a collection of fine-tuning scripts to help researchers fine-tune Qwen 2 VL on HuggingFace datasets.☆77Updated 5 months ago
- Efficient Finetuning for OpenAI GPT-OSS☆22Updated 2 months ago
- Various installation guides for Large Language Models☆77Updated 7 months ago
- An extension of the nanoGPT repository for training small MoE models.☆216Updated 9 months ago
- Set of scripts to finetune LLMs☆38Updated last year