autonomi-ai / nos
⚡️ A fast and flexible PyTorch inference server that runs locally, on any cloud or AI HW.
☆146 · Updated last year
Alternatives and similar repositories for nos
Users interested in nos are comparing it to the libraries listed below.
- ☆198 · Updated last year
- Vector Database with support for late interaction and token level embeddings. ☆54 · Updated 5 months ago
- 🕹️ Performance Comparison of MLOps Engines, Frameworks, and Languages on Mainstream AI Models. ☆139 · Updated last year
- TitanML Takeoff Server is an optimization, compression and deployment platform that makes state of the art machine learning models access… ☆114 · Updated last year
- Aana SDK is a powerful framework for building AI enabled multimodal applications. ☆53 · Updated 3 months ago
- GRDN.AI app for garden optimization ☆69 · Updated 3 weeks ago
- ☆114 · Updated last year
- ☆89 · Updated last year
- GPU prices aggregator for cloud providers ☆44 · Updated this week
- run paligemma in real time ☆133 · Updated last year
- A curated list of amazingly awesome Modal applications, demos, and shiny things. Inspired by awesome-php. ☆166 · Updated last month
- Replace expensive LLM calls with finetunes automatically ☆66 · Updated last year
- Pipeline is an open source python SDK for building AI/ML workflows ☆138 · Updated last year
- Maybe the new state of the art vision model? we'll see 🤷‍♂️ ☆167 · Updated last year
- Python client library for improving your LLM app accuracy ☆97 · Updated 10 months ago
- Cerule - A Tiny Mighty Vision Model ☆68 · Updated last month
- Efficient vector database for hundred millions of embeddings. ☆211 · Updated last year
- Fine-tuning and serving LLMs on any cloud ☆90 · Updated 2 years ago
- Self-host LLMs with vLLM and BentoML ☆161 · Updated 2 weeks ago
- LLaVA server (llama.cpp). ☆183 · Updated 2 years ago
- Foyle is a copilot to help developers deploy and operate their applications. ☆132 · Updated 8 months ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆146 · Updated 2 years ago
- Start a server from the MLX library. ☆194 · Updated last year
- Run GGML models with Kubernetes. ☆175 · Updated last year
- ☆38 · Updated last year
- an implementation of Self-Extend, to expand the context window via grouped attention ☆119 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆53 · Updated 2 years ago
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆222 · Updated last year
- Full finetuning of large language models without large memory requirements ☆94 · Updated 2 months ago
- A collection of LLM services you can self host via docker or modal labs to support your applications development ☆198 · Updated last year