GaiaNet-AI / node-configs
☆ 21 · Updated last week
Related projects
Alternatives and complementary repositories for node-configs
- The easiest & fastest way to run customized and fine-tuned LLMs locally or on the edge · ☆ 10 · Updated this week
- Embed WasmEdge functions in a Rust host app · ☆ 30 · Updated last month
- Rust library to access the OpenAI API · ☆ 15 · Updated 11 months ago
- A list of flow functions · ☆ 33 · Updated last year
- Rust port of llm.c by @karpathy · ☆ 38 · Updated 7 months ago
- A simple, CUDA- or CPU-powered library for creating vector embeddings using Candle and models from Hugging Face · ☆ 27 · Updated 6 months ago
- auto-rust is an experimental project that automatically generates Rust code with LLMs (Large Language Models) during compilation, utilizing… · ☆ 32 · Updated last week
- An extensible CLI for integrating LLMs with a flexible scripting system · ☆ 22 · Updated 4 months ago
- Rust LLM Stream Analyzer and Content Generator · ☆ 16 · Updated 6 months ago
- A RAG API server written in Rust following OpenAI specs (see the request sketch after this list) · ☆ 31 · Updated last week
- Lightweight web service clients in the WasmEdge Runtime using the Rust reqwest library · ☆ 12 · Updated 4 months ago
- Rust implementation of Surya · ☆ 52 · Updated last month
- Implementing the BitNet model in Rust · ☆ 28 · Updated 7 months ago
- memchr vs. stringzilla: up to 7x throughput difference between two SIMD-accelerated substring search libraries in Rust · ☆ 45 · Updated 7 months ago
- ☆ 26 · Updated last year
- A distributed execution framework built upon lunatic · ☆ 16 · Updated 10 months ago
- Proof of concept for a generative AI application framework powered by WebAssembly and Extism · ☆ 14 · Updated last year
- The AI agent script CLI for Programmable Prompt Engine · ☆ 26 · Updated last month
- Run Generative AI models directly on your hardware · ☆ 22 · Updated 3 months ago
- AI gateway and observability server written in Rust. Designed to help optimize multi-agent workflows · ☆ 46 · Updated 4 months ago
- Flow function examples for flows.network · ☆ 25 · Updated last year
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust · ☆ 80 · Updated 10 months ago
- A Fish Speech implementation in Rust, with Candle.rs · ☆ 45 · Updated this week
- The easiest & fastest way to run customized and fine-tuned LLMs locally or on the edge · ☆ 15 · Updated last week
- Rust CLI tool for running multiple commands in parallel · ☆ 13 · Updated last month
- llm_utils: Basic LLM tools, best practices, and minimal abstraction · ☆ 36 · Updated last month
- An ecosystem of Rust libraries for working with large language models · ☆ 11 · Updated last year
- Run LLaMA inference on CPU, with Rust 🦀🚀🦙 · ☆ 20 · Updated last year
- Command Agent runner to accelerate production coding. File-based, fully customizable, NOT for building snake games · ☆ 34 · Updated this week
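
Several of the servers above, notably the RAG API server and the LLM servers, expose OpenAI-compatible endpoints, so any OpenAI-style client can talk to them. Below is a minimal sketch in Rust using reqwest; the base URL, port, and model id are illustrative assumptions, not values documented by any of these repositories.

```rust
// Minimal sketch: POSTing to an OpenAI-compatible /v1/chat/completions
// endpoint, such as one exposed by a local RAG API server.
//
// Assumed Cargo.toml dependencies:
//   reqwest = { version = "0.12", features = ["blocking", "json"] }
//   serde_json = "1"

use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();

    // Hypothetical local endpoint; adjust to wherever your server listens.
    let resp: serde_json::Value = client
        .post("http://localhost:8080/v1/chat/completions")
        .json(&json!({
            "model": "local-model", // placeholder model id
            "messages": [
                { "role": "system", "content": "You are a helpful assistant." },
                { "role": "user",   "content": "What is GaiaNet?" }
            ]
        }))
        .send()?
        .error_for_status()?
        .json()?;

    // The OpenAI chat spec puts the reply under choices[0].message.content.
    if let Some(content) = resp["choices"][0]["message"]["content"].as_str() {
        println!("{content}");
    }
    Ok(())
}
```

The blocking client keeps the sketch short; an async client under tokio works the same way against the same endpoint.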