ComposioHQ / Composio-Function-Calling-Benchmark
Function Calling Benchmark & Testing
☆92 · Updated last year
Alternatives and similar repositories for Composio-Function-Calling-Benchmark
Users interested in Composio-Function-Calling-Benchmark are comparing it to the libraries listed below.
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆222 · Updated last year
- Client Code Examples, Use Cases and Benchmarks for Enterprise h2oGPTe RAG-Based GenAI Platform ☆91 · Updated 2 months ago
- ☆117 · Updated 11 months ago
- ☆120 · Updated last year
- Experimental Code for StructuredRAG: JSON Response Formatting with Large Language Models ☆113 · Updated 7 months ago
- Simple examples using Argilla tools to build AI ☆56 · Updated last year
- A DSPy-based implementation of the tree of thoughts method (Yao et al., 2023) for generating persuasive arguments ☆93 · Updated 2 months ago
- Synthetic Data for LLM Fine-Tuning ☆119 · Updated last year
- High level library for batched embeddings generation, blazingly-fast web-based RAG and quantized indexes processing ⚡ ☆68 · Updated 2 weeks ago
- Code for evaluating with Flow-Judge-v0.1 - an open-source, lightweight (3.8B) language model optimized for LLM system evaluations. Crafte… ☆78 · Updated last year
- ☆164 · Updated 3 months ago
- ☆146 · Updated last year
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆91 · Updated 10 months ago
- Solving data for LLMs - Create quality synthetic datasets! ☆150 · Updated 10 months ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆51 · Updated last year
- GPT-4 Level Conversational QA Trained In a Few Hours ☆66 · Updated last year
- Simple Graph Memory for AI applications ☆89 · Updated 6 months ago
- Low-Rank adapter extraction for fine-tuned transformers models ☆179 · Updated last year
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- LLM reads a paper and produces a working prototype ☆58 · Updated 7 months ago
- ☆67 · Updated last year
- ReDel is a toolkit for researchers and developers to build, iterate on, and analyze recursive multi-agent systems. (EMNLP 2024 Demo) ☆88 · Updated this week
- RAGElo is a set of tools that helps you select the best RAG-based LLM agents using an Elo ranker ☆123 · Updated last month
- Lean implementation of various multi-agent LLM methods, including Iteration of Thought (IoT) ☆122 · Updated 9 months ago
- Auto Data is a library designed for quick and effortless creation of datasets tailored for fine-tuning Large Language Models (LLMs). ☆103 · Updated last year
- Framework for building, orchestrating and deploying multi-agent systems. Managed by the OpenAI Solutions team. Experimental framework. ☆92 · Updated last year
- Benchmark various LLM Structured Output frameworks: Instructor, Mirascope, Langchain, LlamaIndex, Fructose, Marvin, Outlines, etc. on task… ☆179 · Updated last year
- autologic is a Python package that implements the SELF-DISCOVER framework proposed in the paper SELF-DISCOVER: Large Language Models Self… ☆60 · Updated last year
- Testing speed and accuracy of RAG with and without a Cross Encoder Reranker ☆50 · Updated last year
- Banishing LLM Hallucinations Requires Rethinking Generalization ☆275 · Updated last year