MicroLlama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget.
☆168 · Aug 11, 2025 · Updated 6 months ago
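The 300M figure in the description can be sanity-checked with a back-of-the-envelope parameter count for a Llama-style decoder. The dimensions below are illustrative assumptions chosen to land near 300M, not MicroLlama's published configuration:

```python
# Back-of-the-envelope parameter count for a ~300M Llama-style model.
# All dimensions here are hypothetical, NOT MicroLlama's actual config.

def llama_param_count(vocab, d_model, n_layers, d_ff, tied_embeddings=True):
    """Count parameters of a Llama-style decoder (full attention heads,
    RMSNorm, SwiGLU MLP)."""
    embed = vocab * d_model               # token embedding table
    attn = 4 * d_model * d_model          # Wq, Wk, Wv, Wo projections
    mlp = 3 * d_model * d_ff              # gate, up, down projections (SwiGLU)
    norms = 2 * d_model                   # two RMSNorms per block
    per_layer = attn + mlp + norms
    final_norm = d_model
    lm_head = 0 if tied_embeddings else vocab * d_model
    return embed + n_layers * per_layer + final_norm + lm_head

# Hypothetical dimensions that land near 300M parameters:
total = llama_param_count(vocab=32_000, d_model=1024, n_layers=21, d_ff=2816)
print(f"{total:,} parameters (~{total / 1e6:.0f}M)")
```

With tied input/output embeddings, most of the budget sits in the transformer blocks (~12.8M parameters per layer at these dimensions) rather than the embedding table.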
Alternatives and similar repositories for MicroLlama
Users that are interested in MicroLlama are comparing it to the libraries listed below
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆13 · Mar 30, 2024 · Updated last year
- jQuery, React, and Streamlit applications written by LLMs ☆16 · Dec 24, 2023 · Updated 2 years ago
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,902 · May 3, 2024 · Updated last year
- ☆23 · Jun 13, 2024 · Updated last year
- ☆10 · Oct 2, 2024 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆11 · Jul 22, 2023 · Updated 2 years ago
- Modeling code for a BitNet b1.58 Llama-style model. ☆25 · Apr 30, 2024 · Updated last year
- Implementation of DoRA ☆307 · Jun 7, 2024 · Updated last year
- Simple high-throughput inference library ☆155 · May 14, 2025 · Updated 9 months ago
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge. ☆85 · Oct 18, 2023 · Updated 2 years ago
- Code for "Analyzing Redundancy in Pretrained Transformer Models", accepted at EMNLP 2020 ☆14 · Oct 6, 2020 · Updated 5 years ago
- Text summarization using Python, deep learning, machine learning, transformers, Hugging Face, OpenAI, and LangChain ☆13 · Nov 26, 2024 · Updated last year
- ☆12 · Apr 17, 2024 · Updated last year
- Bamboo-7B Large Language Model ☆93 · Mar 28, 2024 · Updated last year
- Train your own small BitNet model ☆78 · Oct 20, 2024 · Updated last year
- The code from our practical deep dive using Mamba for information extraction ☆57 · Dec 22, 2023 · Updated 2 years ago
- ☆15 · Oct 31, 2023 · Updated 2 years ago
- Optimizing diffusion for production-ready speeds ☆37 · Jan 10, 2026 · Updated last month
- ☆14 · Jun 25, 2025 · Updated 8 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆110 · Mar 7, 2025 · Updated last year
- Fulloch - The Fully Local Home Voice Assistant ☆46 · Feb 10, 2026 · Updated last month
- A simple LLaMA implementation using MLX. ☆15 · Apr 22, 2024 · Updated last year
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆36 · Jun 7, 2024 · Updated last year
- 20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale. ☆13,206 · Mar 1, 2026 · Updated last week
- Generate glue code in seconds to simplify your NVIDIA Triton Inference Server deployments ☆21 · Jul 2, 2024 · Updated last year
- An implementation of the base GPT-3 model architecture from the OpenAI paper "Language Models are Few-Shot Learners" ☆20 · Jun 29, 2024 · Updated last year
- These agents work with any local model: you ask your question and simply indicate the number of agents and experts who will answer it… ☆19 · Feb 25, 2024 · Updated 2 years ago
- ☆17 · Mar 30, 2024 · Updated last year
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,678 · Oct 28, 2024 · Updated last year
- Fast modular code to create and train cutting-edge LLMs ☆68 · May 16, 2024 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars ☆989 · Jul 23, 2024 · Updated last year
- Shire Lang Spring/Java demo project ☆18 · Jan 14, 2025 · Updated last year
- Official code for the paper "Metadata Archaeology" ☆19 · May 10, 2023 · Updated 2 years ago
- The official code of "Building on Efficient Foundations: Effectively Training LLMs with Structured Feedforward Layers" ☆19 · Jul 24, 2024 · Updated last year
- Code for "Merging Text Transformers from Different Initializations" ☆20 · Feb 2, 2025 · Updated last year
- Fully fine-tune large models like Mistral, Llama-2-13B, or Qwen-14B completely for free ☆233 · Oct 31, 2024 · Updated last year
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention…" ☆297 · May 4, 2024 · Updated last year
- ☆14 · Jul 26, 2023 · Updated 2 years ago
- The objective of this project is to demonstrate how to fine-tune deepseek-r1-distill-llama-8b. ☆16 · Feb 19, 2025 · Updated last year