graelo/pumas
Power Usage Monitor for Apple Silicon
★128 · Updated 2 months ago
Related projects
Alternatives and complementary repositories for pumas
- Sudoless performance monitoring for Apple Silicon processors. CPU / GPU / RAM usage, power consumption & temperature · ★303 · Updated this week
- Super-simple, fully Rust-powered "memory" (doc store + semantic search) for LLM projects · ★55 · Updated last year
- NviWatch: a blazingly fast Rust-based TUI for managing and monitoring NVIDIA GPU processes · ★176 · Updated 2 months ago
- A Fish Speech implementation in Rust, with Candle.rs · ★45 · Updated this week
- Rust port of llm.c by @karpathy · ★38 · Updated 7 months ago
- Rust implementation of Surya · ★52 · Updated last month
- 8-bit floating point types for Rust · ★39 · Updated last month
- Fast command-line app in Rust/Tokio to run commands in parallel. Similar interface to GNU parallel or xargs, plus useful features. Liste… · ★163 · Updated 2 months ago
- Unofficial Rust bindings to Apple's MLX framework · ★68 · Updated this week
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust · ★80 · Updated 10 months ago
- bott: Your Terminal Copilot · ★84 · Updated 9 months ago
- Network Top: helps you monitor network traffic with BPF · ★175 · Updated last week
- Run Linux ELF binaries directly on macOS via Hypervisor.framework · ★38 · Updated 6 months ago
- Disk analyser and cleanup tool · ★102 · Updated 5 months ago
- Run commands in the languages you love! · ★111 · Updated last month
- C API for MLX · ★79 · Updated this week
- The fastest CLI tool for prompting LLMs, including support for prompting several LLMs at once · ★62 · Updated 2 months ago
- How fast can we do simple math on 1 billion rows of input? · ★39 · Updated 10 months ago
- The easiest Rust interface for local LLMs, and an interface for deterministic signals from probabilistic LLM vibes · ★133 · Updated 3 weeks ago
- ★136 · Updated 9 months ago
- Hold on tight · ★109 · Updated last week
- ★22 · Updated this week
- TensorRT-LLM server with structured outputs (JSON), built with Rust · ★13 · Updated last week
- Quick File Copy using QUIC · ★154 · Updated this week
- ⚡ Edgen: local, private GenAI server alternative to OpenAI. No GPU required. Run AI models locally: LLMs (Llama2, Mistral, Mixtral...), … · ★339 · Updated 5 months ago
- Neural search for websites, docs, articles — online! · ★128 · Updated 3 weeks ago
- ★45 · Updated 11 months ago
- Process Interactive Kill · ★181 · Updated last week
- LLM training in simple, raw C/CUDA, migrated into Rust · ★35 · Updated last week
- A high-level profiler for process-level events such as fork, exec, exit, setpgid, and setsid · ★41 · Updated last month