anordin95 / run-llama-locally
Run and explore Llama models locally with minimal dependencies on CPU
☆189 · Updated last year
Alternatives and similar repositories for run-llama-locally
Users interested in run-llama-locally are comparing it to the libraries listed below.
- Docker-based inference engine for AMD GPUs ☆230 · Updated last year
- Implement recursion using English as the programming language and an LLM as the runtime. ☆236 · Updated 2 years ago
- Generate Cool-Looking Mazes and Animations Illustrating the A* Pathfinding Algorithm ☆177 · Updated 8 months ago
- This project collects GPU benchmarks from various cloud providers and compares them to fixed per token costs. Use our tool for efficient … ☆220 · Updated 10 months ago
- ai for jq ☆244 · Updated last year
- Proof of thought: LLM-based reasoning using Z3 theorem proving with multiple backend support (SMT2 and JSON DSL) ☆344 · Updated last week
- LLM plugin for pulling content from Hacker News ☆120 · Updated 5 months ago
- Absolute minimalistic implementation of a GPT-like transformer using only numpy (<650 lines). ☆254 · Updated last year
- High-Performance Implementation of OpenAI's TikToken. ☆458 · Updated 3 months ago
- Examples and guides for using the VLM Run API ☆295 · Updated 3 weeks ago
- A comprehensive suite of tools, built to liberate science by making the creation, evaluation, and dissemination of research more transpar… ☆222 · Updated 2 months ago
- A Python implementation for stitching images. ☆233 · Updated last year
- Dead Simple LLM Abliteration ☆232 · Updated 8 months ago
- ☆281 · Updated 4 months ago
- See Through Your Models ☆400 · Updated 3 months ago
- A fractal-structure-inspired interactive graph visualization UI built on orbiting parent-child, zooming elements ☆129 · Updated 7 months ago
- Code sample showing how to run and benchmark models on Qualcomm's Windows PCs ☆102 · Updated last year
- CleverBee - The Open Source Deep Researcher Tool ☆306 · Updated 4 months ago
- A command-line interface for LLMs written in Bash. ☆435 · Updated 8 months ago
- Bayesian Optimization as a Coverage Tool for Evaluating LLMs. Accurate evaluation (benchmarking) that's 10 times faster with just a few l… ☆285 · Updated last month
- TUI app: give it a YouTube URL and you get a transcription with possible speaker identification and optional summary or translation, all … ☆320 · Updated 6 months ago
- LLM-generated real-time commentary for Pong ☆152 · Updated 5 months ago
- Parallel thinking for LLMs. Confidence‑gated, strategy‑driven, offline‑friendly ☆257 · Updated last month
- Run larger LLMs with longer contexts on Apple Silicon by using differentiated precision for KV cache quantization. KVSplit enables 8-bit … ☆360 · Updated 5 months ago
- Animating R1's thoughts. ☆385 · Updated 8 months ago
- ☆196 · Updated 5 months ago
- ☆162 · Updated 7 months ago
- Multimodal RAG to search and interact locally with technical documents of any kind ☆252 · Updated last week
- Add object detection, tracking, and mobile notifications to any RTSP Camera or iPhone. ☆485 · Updated this week
- Financial instrument definitions built with Python and Pydantic ☆198 · Updated 8 months ago