evintunador / minLlama3
a simplified version of Meta's Llama 3 model to be used for learning
☆39 · Updated 8 months ago
Alternatives and similar repositories for minLlama3:
Users interested in minLlama3 are comparing it to the libraries listed below
- From-scratch implementation of a vision language model in pure PyTorch ☆194 · Updated 9 months ago
- LLaMA 2 implemented from scratch in PyTorch ☆292 · Updated last year
- Notes and commented code for RLHF (PPO) ☆69 · Updated 11 months ago
- LLaMA 3 is one of the most promising open-source models after Mistral; this project recreates its architecture in a simpler manner. ☆138 · Updated 5 months ago
- Tutorial for how to build BERT from scratch ☆87 · Updated 8 months ago
- Distributed training (multi-node) of a Transformer model ☆53 · Updated 10 months ago
- A project to improve skills of large language models ☆247 · Updated this week
- Training and fine-tuning an LLM in Python and PyTorch. ☆41 · Updated last year
- A set of scripts and notebooks on LLM fine-tuning and dataset creation ☆102 · Updated 4 months ago
- Implementation of DoRA ☆290 · Updated 8 months ago
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆141 · Updated 10 months ago
- ☆125 · Updated last month
- Comprehensive toolkit for Reinforcement Learning from Human Feedback (RLHF) training, featuring instruction fine-tuning, reward model tra… ☆136 · Updated 10 months ago
- ☆251 · Updated last year
- LoRA and DoRA from Scratch Implementations ☆195 · Updated 11 months ago
- Direct Preference Optimization from scratch in PyTorch ☆78 · Updated last year
- A compact LLM pretrained in 9 days using high-quality data ☆290 · Updated 2 months ago
- Official repository for ORPO ☆435 · Updated 8 months ago
- ☆496 · Updated 2 months ago
- LoRA: Low-Rank Adaptation of Large Language Models implemented using PyTorch ☆94 · Updated last year
- LLaMA-TRL: Fine-tuning LLaMA with PPO and LoRA ☆199 · Updated last year
- This repository contains an implementation of the LLaMA 2 (Large Language Model Meta AI) model, a Generative Pretrained Transformer (GPT)… ☆59 · Updated last year
- Code for Quiet-STaR ☆711 · Updated 5 months ago
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… ☆286 · Updated 9 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆441 · Updated 5 months ago
- Starter pack for NeurIPS LLM Efficiency Challenge 2023. ☆124 · Updated last year
- A family of compressed models obtained via pruning and knowledge distillation ☆320 · Updated 3 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆347 · Updated 5 months ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆173 · Updated 5 months ago
- Customizable template GPT code designed for easy experimentation with novel architectures ☆26 · Updated 3 months ago