simplifine-llm / Simplifine
Easy, open-source LLM finetuning with one-line commands, seamless cloud integration, and popular optimization frameworks.
⭐ 90 · Updated 8 months ago
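For context on the kind of workflow a "one-line finetuning" wrapper like this automates, below is a rough, generic sketch of a LoRA finetuning run written against the standard Hugging Face stack (transformers, peft, datasets). It is illustrative only and is not Simplifine's own API; the model and dataset names are arbitrary example values chosen so the sketch is small enough to run anywhere.

```python
# Generic LoRA finetuning sketch (NOT Simplifine's API): tiny causal LM,
# toy dataset, parameter-efficient training via a LoRA adapter.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "sshleifer/tiny-gpt2"  # tiny example model so this runs on CPU
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 tokenizers have no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Attach a LoRA adapter so only a small set of extra weights is trained.
model = get_peft_model(
    model,
    LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM"),
)

# Toy dataset: tokenize short text samples and drop the raw string columns.
data = load_dataset("Abirate/english_quotes", split="train[:100]")
data = data.map(
    lambda x: tokenizer(x["quote"], truncation=True, max_length=64),
    batched=True,
    remove_columns=data.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="sft-out",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        logging_steps=10,
        report_to="none",
    ),
    train_dataset=data,
    # mlm=False makes the collator copy input_ids into labels for causal LM loss.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Tools in this space differ mainly in how much of the above boilerplate (tokenization, adapter wiring, trainer configuration, cloud dispatch) they hide behind a single call or CLI command.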
Alternatives and similar repositories for Simplifine:
Users interested in Simplifine are comparing it to the libraries listed below.
- Kotlin bindings for Edgerunner · ⭐ 30 · Updated 8 months ago
- ⭐ 47 · Updated last week
- Simplified AI runtime integration for mobile app development · ⭐ 65 · Updated 4 months ago
- Open Source Auth Built on Freestyle: own your auth + data https://docs.freestyle.dev/guides/authentication/ · ⭐ 22 · Updated 10 months ago
- CLI tool for converting Pydantic models into TypeScript definitions · ⭐ 34 · Updated 5 months ago
- Prompt engineering, automated. · ⭐ 299 · Updated 2 weeks ago
- ⭐ 65 · Updated 4 months ago
- Cloudstate is a JavaScript database runtime. · ⭐ 175 · Updated 3 weeks ago
- Felafax is building AI infra for non-NVIDIA GPUs · ⭐ 558 · Updated 2 months ago
- Rowboat monorepo · ⭐ 29 · Updated this week
- Blockoli is a high-performance tool for code indexing, embedding generation, and semantic search for use with LLMs. · ⭐ 115 · Updated 11 months ago
- Fine-tuning and serving LLMs on any cloud · ⭐ 89 · Updated last year
- Text analytics for LLM apps. Cluster messages to detect use cases, outliers, power users. Detect intents and run evals with LLM (OpenAI, … · ⭐ 427 · Updated 2 weeks ago
- LLM Testing SDK that helps you write and run tests to monitor your LLM app in production · ⭐ 131 · Updated last year
- Data-Driven Evaluation for LLM-Powered Applications · ⭐ 487 · Updated 2 months ago
- Model Manager is a Python package that simplifies the process of deploying an open source AI model to your own cloud. · ⭐ 320 · Updated 10 months ago
- Python SDK for running evaluations on LLM generated responses · ⭐ 276 · Updated last week
- Universal language-agnostic AST walking and accurate call stack generation with tree-sitter. · ⭐ 103 · Updated 7 months ago
- VS Code extension to convert computationally intensive PyTorch kernels to Triton · ⭐ 22 · Updated 6 months ago
- Parameterize Python scripts/notebooks all from the command line and run on cloud GPUs · ⭐ 98 · Updated 7 months ago
- Dockerized LLM inference server with constrained output (JSON mode), built on top of vLLM and outlines. Faster, cheaper and without rate … · ⭐ 27 · Updated last year
- Open source fraud and abuse prevention tools · ⭐ 206 · Updated 11 months ago
- LLM fine-tuning and eval · ⭐ 346 · Updated last year
- Deploy Astro.js to freestyle.sh with cloudstate javascript object persistence. · ⭐ 50 · Updated 2 months ago
- Synthetic Data for LLM Fine-Tuning · ⭐ 113 · Updated last year
- A lightweight logger for machine learning teams to log images and predictions in production. · ⭐ 152 · Updated last year
- Multi-language code navigation API in a container · ⭐ 74 · Updated 3 weeks ago
- Cedana: Access and run on compute anywhere in the world, on any provider. Migrate seamlessly between providers, arbitraging price/perform… · ⭐ 58 · Updated last week
- Logging and caching superpowers for the OpenAI SDK · ⭐ 104 · Updated last year
- ⭐ 89 · Updated 6 months ago