jeffrey-fong / Invoker
The one who calls upon functions - Function-Calling Language Model
☆36 · Updated last year
Alternatives and similar repositories for Invoker
Users interested in Invoker are comparing it to the libraries listed below:
- A fast batching API to serve LLM models ☆183 · Updated last year
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆221 · Updated last year
- ☆131 · Updated 2 months ago
- ☆115 · Updated 6 months ago
- ☆157 · Updated 11 months ago
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- A multimodal, function calling powered LLM webui. ☆214 · Updated 9 months ago
- Experimental LLM Inference UX to aid in creative writing ☆114 · Updated 6 months ago
- GPT-2 small trained on phi-like data ☆66 · Updated last year
- Client-side toolkit for using large language models, including where self-hosted ☆111 · Updated 7 months ago
- A simple experiment on letting two local LLMs have a conversation about anything! ☆110 · Updated last year
- Easily view and modify JSON datasets for large language models ☆77 · Updated last month
- Low-Rank adapter extraction for fine-tuned transformers models ☆173 · Updated last year
- Locally running LLM with internet access ☆95 · Updated last week
- For inferring and serving local LLMs using the MLX framework ☆104 · Updated last year
- Complex RAG backend ☆28 · Updated last year
- ☆17 · Updated 6 months ago
- Scripts to create your own MoE models using MLX ☆90 · Updated last year
- A guidance compatibility layer for llama-cpp-python ☆35 · Updated last year
- A lightweight, open-source blueprint for building powerful and scalable LLM chat applications ☆28 · Updated last year
- A python package for developing AI applications with local LLMs. ☆150 · Updated 6 months ago
- Dataset Crafting w/ RAG/Wikipedia ground truth and Efficient Fine-Tuning Using MLX and Unsloth. Includes configurable dataset annotation … ☆185 · Updated 11 months ago
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆147 · Updated last year
- Gradio based tool to run opensource LLM models directly from Huggingface ☆93 · Updated last year
- entropix style sampling + GUI ☆26 · Updated 8 months ago
- ☆135 · Updated last year
- A python package for serving LLM on OpenAI-compatible API endpoints with prompt caching using MLX. ☆88 · Updated last week
- A stable, fast and easy-to-use inference library with a focus on a sync-to-async API ☆45 · Updated 9 months ago
- an implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated last year
- ☆66 · Updated last year