uygarkurt / Llama-3-PyTorch
☆35 · Updated 7 months ago
Alternatives and similar repositories for Llama-3-PyTorch
Users that are interested in Llama-3-PyTorch are comparing it to the libraries listed below
- LLaMA 3 is one of the most promising open-source models after Mistral; this repo recreates its architecture in a simpler manner. ☆177 · Updated last year
- A repository of Python scripts to scrape code contents of the public repositories of `huggingface`. ☆53 · Updated last year
- ☆88 · Updated last year
- LoRA: Low-Rank Adaptation of Large Language Models, implemented in PyTorch ☆112 · Updated 2 years ago
- ☆43 · Updated 2 months ago
- ☆44 · Updated 3 months ago
- Experimenting with small language models ☆70 · Updated last year
- Collection of autoregressive model implementations ☆86 · Updated 4 months ago
- Reference implementation of the Mistral AI 7B v0.1 model. ☆28 · Updated last year
- Complete implementation of Llama2 with/without KV cache & inference 🚀 ☆48 · Updated last year
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆48 · Updated 11 months ago
- Minimal GRPO implementation from scratch ☆96 · Updated 5 months ago
- Various installation guides for Large Language Models ☆72 · Updated 4 months ago
- Building a 2.3M-parameter LLM from scratch with the LLaMA 1 architecture. ☆182 · Updated last year
- Micro Llama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget ☆157 · Updated last week
- ☆33 · Updated last year
- Customizable template GPT code designed for easy experimentation with novel architectures ☆26 · Updated 5 months ago
- Notes about the "Attention is all you need" video (https://www.youtube.com/watch?v=bCz4OMemCcA) ☆301 · Updated 2 years ago
- An overview of GRPO & DeepSeek-R1 training, with open-source GRPO model fine-tuning ☆34 · Updated 3 months ago
- Distributed training (multi-node) of a Transformer model ☆77 · Updated last year
- Implementation of a GPT-4o-like multimodal model from scratch in Python ☆69 · Updated 4 months ago
- Swarming algorithms like PSO, Ant Colony, Sakana, and more in PyTorch 😊 ☆129 · Updated last week
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆41 · Updated last year
- The code from our practical deep dive using Mamba for information extraction ☆53 · Updated last year
- GPU Kernels ☆193 · Updated 3 months ago
- ☆54 · Updated 6 months ago
- RAGs: Simple implementations of Retrieval Augmented Generation (RAG) Systems ☆124 · Updated 7 months ago
- LLaMA 2 implemented from scratch in PyTorch ☆347 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆232 · Updated 9 months ago
- An LLM cookbook for building your own from scratch, all the way from gathering data to training a model ☆151 · Updated last year