ItsBarryZ / Auto-distill-GPT
☆13 · Updated 2 years ago
Alternatives and similar repositories for Auto-distill-GPT
Users interested in Auto-distill-GPT are comparing it to the libraries listed below.
- Using various instructor clients to evaluate the quality and capabilities of extractions and reasoning. ☆51 · Updated last year
- Verbosity control for AI agents ☆66 · Updated last year
- Repository of the code base for the KT Generation process that we worked on at the Google Cloud and Searce GenAI Hackathon. ☆77 · Updated 2 years ago
- 📝 Reference-Free automatic summarization evaluation with potential hallucination detection ☆103 · Updated 2 years ago
- Doing simple retrieval from LLM models at various context lengths to measure accuracy ☆108 · Updated 4 months ago
- ☆79 · Updated last year
- Use the OpenAI Batch tool to make async batch requests to the OpenAI API. ☆101 · Updated last year
- A collection of LLM services you can self-host via Docker or Modal Labs to support your application development ☆198 · Updated last year
- Demo of ConversationEntityMemory in Streamlit. ☆52 · Updated 3 years ago
- Chat Markup Language conversation library ☆55 · Updated 2 years ago
- ☆85 · Updated last year
- GPT-based Conversation Summarizer ☆152 · Updated 2 years ago
- ☆135 · Updated 2 years ago
- Writing Blog Posts with Generative Feedback Loops! ☆50 · Updated last year
- ☆45 · Updated 2 years ago
- ☆46 · Updated 2 years ago
- Utilities for loading and running text embeddings with ONNX ☆45 · Updated 5 months ago
- Deploy a FastHTML app in just a few lines of simple Python code on Modal's serverless infra. ☆26 · Updated last year
- ☆107 · Updated 2 years ago
- Just a bunch of benchmark logs for different LLMs ☆119 · Updated last year
- Using modal.com to process FineWeb-edu data ☆20 · Updated 10 months ago
- Text to Python Objects via an LLM Function Call ☆58 · Updated last year
- ☆80 · Updated last year
- Reimplementation of the task generation part from the Alpaca paper ☆119 · Updated 2 years ago
- ☆34 · Updated 2 years ago
- Recipes and resources for building, deploying, and fine-tuning generative AI with Fireworks. ☆134 · Updated 3 weeks ago
- ☆53 · Updated last year
- Use OpenAI with HuggingChat by emulating the text_generation_inference_server ☆44 · Updated 2 years ago
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆32 · Updated 4 months ago
- Comprehensive analysis of the differences in performance between QLoRA, LoRA, and full fine-tunes. ☆83 · Updated 2 years ago