anthropics/anthropic-retrieval-demo
Lightweight demo using the Anthropic Python SDK to experiment with Claude's Search and Retrieval capabilities over a variety of knowledge bases (Elasticsearch, vector databases, web search, and Wikipedia).
☆ 117 · Updated 2 months ago
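The core pattern the demo explores is retrieval augmentation: fetch relevant passages from a knowledge base (Elasticsearch, a vector database, web search, or Wikipedia), then hand them to Claude as context. A minimal sketch of that pattern is below — the in-memory "knowledge base", the `search` and `build_prompt` helpers, and the model name are illustrative assumptions, not the demo's actual API.

```python
# Toy knowledge base standing in for Elasticsearch / a vector DB / Wikipedia.
DOCS = [
    "Elasticsearch is a distributed search and analytics engine.",
    "Wikipedia is a free online encyclopedia.",
    "Vector databases store embeddings for similarity search.",
]

def search(query: str, docs=DOCS, top_k: int = 2) -> list[str]:
    """Toy keyword search: score each doc by how many query terms it contains."""
    terms = query.lower().split()
    scored = [(sum(t in d.lower() for t in terms), d) for d in docs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for score, d in scored[:top_k] if score > 0]

def build_prompt(query: str) -> str:
    """Format the retrieved passages and the question for the model."""
    context = "\n".join(f"- {p}" for p in search(query))
    return f"Use the following context to answer.\n{context}\n\nQuestion: {query}"

# With the `anthropic` package installed and ANTHROPIC_API_KEY set, the prompt
# would be sent roughly like this (the model name is an assumption):
#
# import anthropic
# client = anthropic.Anthropic()
# msg = client.messages.create(
#     model="claude-3-5-sonnet-latest",
#     max_tokens=512,
#     messages=[{"role": "user", "content": build_prompt("What is Elasticsearch?")}],
# )
# print(msg.content[0].text)
```

The demo itself swaps the toy `search` for real backends; the prompt-assembly step stays essentially the same.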
Related projects:
- ☆ 254 · Updated 5 months ago
- This open-source repository offers reference code for integrating workplace datastores with Cohere's LLMs, enabling developers and busine… ☆ 136 · Updated 2 weeks ago
- ☆ 60 · Updated last year
- 🦜💯 Flex those feathers! ☆ 227 · Updated last month
- A semantic research engine to get relevant papers based on a user query. Application frontend with Chainlit Copilot. Observability with L… ☆ 74 · Updated 4 months ago
- This repo is the central repo for all the RAG Evaluation reference material and partner workshops ☆ 39 · Updated 2 months ago
- Automated knowledge graph creation SDK ☆ 95 · Updated 2 months ago
- Open-source RAG evaluation through users' feedback ☆ 155 · Updated 5 months ago
- ☆ 50 · Updated 4 months ago
- Generate Tools and Toolkits from any Python SDK -- no extra code required ☆ 42 · Updated 2 months ago
- Visualize Different Text Splitting Methods ☆ 177 · Updated 7 months ago
- A Ruby on Rails-style framework for the DSPy (Demonstrate, Search, Predict) project for language models like GPT, BERT, and LLaMA ☆ 101 · Updated this week
- Human-AI collaboration to produce a news story about a meeting from its minutes or transcript ☆ 150 · Updated 3 months ago
- An example application built with LangChain CLI and LangServe ☆ 69 · Updated 8 months ago
- RAGArch is a Streamlit-based application that empowers users to experiment with various components and parameters of Retrieval-Augmented … ☆ 77 · Updated 7 months ago
- Build Generative AI applications with LangChain on AWS ☆ 169 · Updated last year
- Tutorial for building an LLM router ☆ 145 · Updated 2 months ago
- Documentation for LangSmith ☆ 76 · Updated this week
- ☆ 56 · Updated this week
- Analyzing chat interactions with LLMs to improve 🦜🔗 LangChain docs ☆ 60 · Updated last year
- Python SDK for experimenting, testing, evaluating & monitoring LLM-powered applications - Parea AI (YC S23) ☆ 72 · Updated last week
- ☆ 134 · Updated 8 months ago
- AutoEvals is a tool for quickly and easily evaluating AI model outputs using best practices ☆ 150 · Updated this week
- Applying Evaluation-Driven Development (EDD) to aid design decisions for RAG pipelines