gururise / openai_text_generation_inference_server
Use OpenAI with HuggingChat by emulating the text_generation_inference_server
☆43 · Updated last year
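The idea behind the repo: implement enough of TGI's REST API (notably the `/generate` route) for HuggingChat to connect to it, and translate those requests into OpenAI API calls. Below is a minimal sketch of that idea, not the repo's actual code; the endpoint shape follows TGI's public REST spec, while the model name and default parameters are assumptions.

```python
# Minimal sketch: emulate TGI's /generate endpoint and forward prompts to OpenAI.
# Not the repo's actual implementation; model name and defaults are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment


class Parameters(BaseModel):
    max_new_tokens: int = 512
    temperature: float = 0.7


class GenerateRequest(BaseModel):
    # TGI clients send {"inputs": "<prompt>", "parameters": {...}}
    inputs: str
    parameters: Parameters = Parameters()


@app.post("/generate")
def generate(req: GenerateRequest):
    # Forward the TGI-style prompt to OpenAI and return a TGI-style response.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name for illustration
        messages=[{"role": "user", "content": req.inputs}],
        max_tokens=req.parameters.max_new_tokens,
        temperature=req.parameters.temperature,
    )
    return {"generated_text": resp.choices[0].message.content}
```

Run with `uvicorn server:app` and point HuggingChat's endpoint configuration at it as if it were a TGI instance.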
Alternatives and similar repositories for openai_text_generation_inference_server:
Users interested in openai_text_generation_inference_server are comparing it to the repositories listed below.
- ☆37 · Updated last year
- Manage conversation histories for LLM-powered applications ☆88 · Updated last year
- QLoRA with enhanced multi-GPU support ☆37 · Updated last year
- An OpenAI Completions API-compatible server for NLP transformer models ☆65 · Updated last year
- Evaluate your LLM apps, RAG pipeline, any generated text, and more! · Updated last year
- 🎨 Imagine what Picasso could have done with AI. Self-host your StableDiffusion API. ☆50 · Updated last year
- High-level library for batched embedding generation, blazing-fast web-based RAG, and quantized index processing ⚡ ☆66 · Updated 6 months ago
- A Google Colab notebook for fine-tuning Alpaca-LoRA (in about 3 hours on a 40 GB A100 GPU) ☆38 · Updated 2 years ago
- Writing Blog Posts with Generative Feedback Loops! ☆47 · Updated last year
- Let's create synthetic textbooks together :) ☆74 · Updated last year
- Official repo for the paper "PHUDGE: Phi-3 as Scalable Judge". Evaluate your LLMs with or without a custom rubric, reference answer, absolute… ☆49 · Updated 9 months ago
- Explore the use of DSPy for extracting features from PDFs 🔎 ☆39 · Updated last year
- Conduct consumer interviews with synthetic focus groups using LLMs and LangChain ☆43 · Updated last year
- ☆48 · Updated last year
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆117 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆78 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K-token sequences from the Pile ☆115 · Updated 2 years ago
- A stable, fast, and easy-to-use inference library with a focus on a sync-to-async API ☆45 · Updated 7 months ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆103 · Updated 9 months ago
- C++ inference wrappers for running blazing-fast embedding services on your favourite serverless platform, such as AWS Lambda. By Prithivi Da, PRs welc… ☆22 · Updated last year
- ☆20 · Updated last year
- Set of scripts to finetune LLMs ☆37 · Updated last year
- 1-Click is all you need. ☆61 · Updated last year
- The Next Generation Multi-Modality Superintelligence ☆71 · Updated 8 months ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆104 · Updated 4 months ago
- 🤝 Trade any tensors over the network ☆30 · Updated last year
- Patch for MPT-7B which allows using and training a LoRA ☆58 · Updated last year
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆34 · Updated 4 months ago
- A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ ☆64 · Updated last year
- ☆18 · Updated 2 weeks ago