maxbbraun / llama4micro
A "large" language model running on a microcontroller
☆543 · Updated 2 years ago
Alternatives and similar repositories for llama4micro
Users interested in llama4micro are comparing it to the repositories listed below.
- Instructions on how to run LLMs on a Raspberry Pi ☆208 · Updated last year
- Inference Llama 2 in one file of pure Python ☆424 · Updated 3 weeks ago
- Llama 2 Everywhere (L2E) ☆1,521 · Updated 3 months ago
- Run PaliGemma in real time ☆133 · Updated last year
- A small code base for training large models ☆315 · Updated 7 months ago
- An MLX project to train a base model on your WhatsApp chats using (Q)LoRA fine-tuning ☆171 · Updated last year
- Running an LLM on the ESP32 ☆432 · Updated last year
- llama.cpp with the BakLLaVA model describing what it sees ☆379 · Updated 2 years ago
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆353 · Updated last year
- Let's make sand talk ☆591 · Updated 2 years ago
- llama3.np, a pure NumPy implementation of the Llama 3 model ☆992 · Updated 7 months ago
- Port of MiniGPT4 in C++ (4-bit, 5-bit, 6-bit, 8-bit, 16-bit CPU inference with GGML) ☆569 · Updated 2 years ago
- gpt-2 from scratch in mlx ☆404 · Updated last year
- Fine-tune mistral-7B on 3090s, a100s, h100s ☆718 · Updated 2 years ago
- The repository for the code of the UltraFastBERT paper ☆520 · Updated last year
- Alex Krizhevsky's original code from Google Code ☆197 · Updated 9 years ago
- ☆863 · Updated 2 years ago
- LLaVA server (llama.cpp) ☆183 · Updated 2 years ago
- Understanding large language models ☆120 · Updated 2 years ago
- Efficient inference of Transformer models ☆471 · Updated last year
- ☆96 · Updated last year
- A really tiny autograd engine ☆96 · Updated 6 months ago
- throwaway GPT inference ☆141 · Updated last year
- Following Karpathy with a GPT-2 implementation and training run, written with lots of comments because I have the memory of a goldfish ☆172 · Updated last year
- Port of Andrej Karpathy's llm.c to Mojo ☆360 · Updated 4 months ago
- An implementation of bucketMul LLM inference ☆223 · Updated last year
- Run GGML models with Kubernetes ☆175 · Updated last year
- ☆1,028 · Updated last year
- Mistral7B playing DOOM ☆138 · Updated last year
- 1.58-bit LLM on Apple Silicon using MLX ☆226 · Updated last year