FareedKhan-dev / train-llama4
Building LLaMA 4 MoE from Scratch
☆72 · Updated 9 months ago
Alternatives and similar repositories for train-llama4
Users who are interested in train-llama4 are comparing it to the repositories listed below.
- Implementation of a GPT-4o-like multimodal model from scratch using Python ☆77 · Updated 10 months ago
- Maximizing the Performance of a Simple RAG using RL ☆90 · Updated 10 months ago
- Minimal GRPO implementation from scratch ☆102 · Updated 10 months ago
- An overview of GRPO & DeepSeek-R1 Training with Open Source GRPO Model Fine-Tuning ☆37 · Updated 8 months ago
- ☆45 · Updated 9 months ago
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing ability. ☆98 · Updated last year
- RL significantly improves the reasoning capability of Qwen2.5-1.5B-Instruct ☆31 · Updated 11 months ago
- From-scratch implementation of a vision language model in pure PyTorch ☆254 · Updated last year
- Composition of Multimodal Language Models From Scratch ☆15 · Updated last year
- Building a 2.3M-parameter LLM from scratch with the LLaMA 1 architecture. ☆197 · Updated last year
- LLaMA 3 is one of the most promising open-source models after Mistral; we will recreate its architecture in a simpler manner. ☆200 · Updated last year
- ☆107 · Updated 10 months ago
- First-principle implementations of groundbreaking AI algorithms using a wide range of deep learning frameworks, accompanied by supporting… ☆181 · Updated 6 months ago
- A Straightforward, Step-by-Step Implementation of a Video Diffusion Model ☆75 · Updated 5 months ago
- ☆80 · Updated 6 months ago
- [ICLR 2026] Tina: Tiny Reasoning Models via LoRA ☆319 · Updated 4 months ago
- Parameter-efficient fine-tuning script for Phi-3-vision, the strong multimodal language model by Microsoft. ☆57 · Updated last year
- Utils for Unsloth https://github.com/unslothai/unsloth ☆191 · Updated this week
- Fine-tune Gemma 3 on an object detection task ☆97 · Updated 6 months ago
- ☆74 · Updated 8 months ago
- An extension of the nanoGPT repository for training small MoE models. ☆236 · Updated 11 months ago
- Inference, fine-tuning, and many more recipes with the Gemma family of models ☆279 · Updated 6 months ago
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆53 · Updated last year
- Distributed training (multi-node) of a Transformer model ☆94 · Updated last year
- Benchmarking the serving capabilities of vLLM ☆59 · Updated last year
- 🎈 A series of lightweight GPT models featuring TinyGPT Base (~51M params) and TinyGPT-MoE (~85M params). Fast, creative text generation … ☆15 · Updated 2 months ago
- Notes and commented code for RLHF (PPO) ☆124 · Updated last year
- Self-host LLMs with vLLM and BentoML ☆168 · Updated 3 weeks ago
- Easy-to-use, high-performance knowledge distillation for LLMs ☆97 · Updated 9 months ago
- A straightforward method for training your LLM, from downloading data to generating text. ☆512 · Updated 6 months ago