FareedKhan-dev / train-llama4
Building LLaMA 4 MoE from Scratch
☆72 · Updated 9 months ago
Alternatives and similar repositories for train-llama4
Users who are interested in train-llama4 are comparing it to the libraries listed below.
- Composition of Multimodal Language Models From Scratch ☆15 · Updated last year
- LLaMA 3 is one of the most promising open-source models after Mistral; we will recreate its architecture in a simpler manner. ☆197 · Updated last year
- Implementation of a GPT-4o-like multimodal model from scratch using Python ☆76 · Updated 9 months ago
- Minimal GRPO implementation from scratch ☆102 · Updated 10 months ago
- Maximizing the Performance of a Simple RAG using RL ☆90 · Updated 10 months ago
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing ability. ☆98 · Updated last year
- ☆45 · Updated 8 months ago
- An overview of GRPO & DeepSeek-R1 training with open-source GRPO model fine-tuning ☆37 · Updated 8 months ago
- Building a 2.3M-parameter LLM from scratch with the LLaMA 1 architecture. ☆196 · Updated last year
- From-scratch implementation of a vision language model in pure PyTorch ☆253 · Updated last year
- RL significantly improves the reasoning capability of Qwen2.5-1.5B-Instruct ☆31 · Updated 10 months ago
- Tina: Tiny Reasoning Models via LoRA ☆314 · Updated 3 months ago
- A Straightforward, Step-by-Step Implementation of a Video Diffusion Model ☆72 · Updated 5 months ago
- Fine-tune Gemma 3 on an object detection task ☆95 · Updated 6 months ago
- Distributed training (multi-node) of a Transformer model ☆91 · Updated last year
- Learn the building blocks needed to build gpt-oss from scratch ☆111 · Updated 3 months ago
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆53 · Updated last year
- ☆104 · Updated 9 months ago
- Parameter-efficient finetuning script for Phi-3-vision, the strong multimodal language model by Microsoft. ☆57 · Updated last year
- (ICCV 2025) OCR Hinders RAG: Evaluating the Cascading Impact of OCR on Retrieval-Augmented Generation ☆95 · Updated last month
- ☆73 · Updated 6 months ago
- Self-host LLMs with vLLM and BentoML ☆163 · Updated this week
- Inference, fine-tuning, and many more recipes with the Gemma family of models ☆277 · Updated 6 months ago
- Fine-tuning large language models (LLMs) is crucial for enhancing performance across domain-specific task applications. This comprehensiv… ☆12 · Updated last year
- ☆93 · Updated 8 months ago
- Easy-to-use, high-performance knowledge distillation for LLMs ☆96 · Updated 8 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆245 · Updated last year
- This is an NVIDIA AI Workbench example project that demonstrates an end-to-end model development workflow using Llamafactory. ☆72 · Updated last year
- [EMNLP 2025] The official implementation for the paper "Agentic-R1: Distilled Dual-Strategy Reasoning" ☆102 · Updated 4 months ago
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details. ☆222 · Updated 5 months ago