jozsefszalma / homelab
The bare metal in my basement
☆21, updated last month
Alternatives and similar repositories for homelab
Users interested in homelab are comparing it to the repositories listed below.
- CrabGrab+Tauri Example App (☆58, updated last year)
- LLaMA from First Principles (☆51, updated 2 years ago)
- A relatively basic implementation of RWKV in Rust, written by someone with very little math and ML knowledge. Supports 32-, 8-, and 4-bit evaluation (☆94, updated 2 years ago)
- Bleeding-edge low-level Rust bindings for GGML (☆16, updated last year)
- A fullstack Rust + React chat app using open-source Llama language models (☆33, updated 2 years ago)
- Comparing performance-oriented string-processing libraries for substring search, multi-pattern matching, hashing, edit distances, and sketching (☆136, updated 3 weeks ago)
- A library for incremental loading of large PyTorch checkpoints (☆56, updated 2 years ago)
- Build tools for LLMs in Rust using the Model Context Protocol (☆38, updated 10 months ago)
- Light WebUI for lm.rs (☆24, updated last year)
- Run LLaMA inference on CPU, with Rust 🦀🚀🦙 (☆24, updated 2 years ago)
- A Rust CLI to generate synthetic data for MLX-friendly training (☆25, updated last year)
- Official Rust implementation of Model2Vec (☆145, updated 3 months ago)
- Prototyping the performance of various components of a theoretical faster Twitter (☆67, updated 3 years ago)
- Iterate quickly with llama.cpp hot reloading; use the llama.cpp bindings with bun.sh (☆50, updated 2 years ago)
- Container solution to compile Rust projects for Linux, macOS, and Windows (☆33, updated 2 years ago)
- Super-simple, fully Rust-powered "memory" (doc store + semantic search) for LLM projects, semantic search, etc. (☆64, updated 2 years ago)
- 33B Chinese LLM: DPO, QLoRA, 100K context, AirLLM 70B inference with a single 4GB GPU (☆13, updated last year)
- Mistral7B playing DOOM (☆138, updated last year)
- tvisor, a tiny 100% userspace syscall interception framework (☆45, updated last year)
- Viznut's C-only GPT-2 implementation (☆53, updated 3 years ago)
- WebGPU LLM inference, tuned by hand (☆151, updated 2 years ago)
- Inference Llama 2 in one file of zero-dependency, zero-unsafe Rust (☆39, updated 2 years ago)
- Fast multi-producer, multi-consumer unbounded channel with async support (☆108, updated 3 years ago)
- Tiny inference-only implementation of LLaMA (☆92, updated last year)
- Inference code for mixtral-8x7b-32kseqlen (☆105, updated 2 years ago)
- LLM tokenizer in Zig (☆15, updated last month)
- Implementing the BitNet model in Rust (☆43, updated last year)