AIAnytime / Phi-2-Fine-Tuning
Fine-tuning Phi-2 to build a mental health GPT.
☆11 · Updated last year
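A minimal sketch of what a LoRA-style fine-tune of Phi-2 could look like with the `transformers` and `peft` libraries. The prompt format, dataset, and hyperparameters here are illustrative assumptions, not the repository's actual settings.

```python
# Hedged sketch: LoRA fine-tuning setup for microsoft/phi-2.
# The instruction template and LoRA hyperparameters below are
# assumptions for illustration, not taken from this repository.

def format_example(instruction: str, response: str) -> str:
    """Build a simple instruct-style training prompt (format is an assumption)."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n{response}"

# Common LoRA settings; values are illustrative only.
lora_hyperparams = {
    "r": 16,
    "lora_alpha": 32,
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "k_proj", "v_proj", "dense"],
}

def build_model():
    """Wrap Phi-2 with LoRA adapters; needs a GPU plus transformers and peft."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
    model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
    model = get_peft_model(
        model, LoraConfig(task_type="CAUSAL_LM", **lora_hyperparams)
    )
    return model, tokenizer
```

Only the LoRA adapter weights are trained, which is what makes fine-tuning a 2.7B-parameter model like Phi-2 feasible on a single consumer GPU.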
Alternatives and similar repositories for Phi-2-Fine-Tuning
Users interested in Phi-2-Fine-Tuning are comparing it to the libraries listed below.
- Medical Mixture of Experts LLM using Mergekit. ☆20 · Updated last year
- ☆11 · Updated last year
- A forest of autonomous agents. ☆19 · Updated 5 months ago
- Metadata Enrichment using KeyBERT for advanced and improved RAG. ☆10 · Updated last year
- This repo lets you run mistral-7b in Google Colab. ☆16 · Updated last year
- Explore the use of DSPy for extracting features from PDFs 🔎 ☆43 · Updated last year
- ☆22 · Updated last year
- Exploration using DSPy to optimize modules to maximize performance on the OpenToM dataset. ☆16 · Updated last year
- Using GPT-3 and Carrot (GPT-3 for computer vision) to create detailed descriptions of images. ☆13 · Updated 3 years ago
- Repository of the code base for the KT Generation process that we worked on at the Google Cloud and Searce GenAI Hackathon. ☆74 · Updated last year
- Transform unstructured documents into actionable, structured data with enterprise-grade precision and reliability, ready for large-scale … ☆19 · Updated last week
- ☆20 · Updated last year
- Writing Blog Posts with Generative Feedback Loops! ☆49 · Updated last year
- This repository implements DSPy programs for tasks in Indian languages. ☆13 · Updated last year
- ☆12 · Updated 2 months ago
- Experimenting with the text-embeddings-inference server on both CPU and GPU. ☆18 · Updated last year
- The original BabyAGI, updated with LiteLLM and no vector database reliance (CSV instead). ☆21 · Updated 9 months ago