gururise / openai_text_generation_inference_server
Use OpenAI with HuggingChat by emulating the text_generation_inference_server
☆44 · Updated 2 years ago
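The core idea is an adapter server: HuggingChat's chat-ui normally talks to HuggingFace's text-generation-inference (TGI) API, so the project emulates that server and forwards prompts to OpenAI instead. Below is a minimal sketch of that approach, assuming a simplified TGI-style `/generate` request/response shape and the `openai` Python client; the class names, model choice, and parameter defaults are illustrative and not taken from the repo's actual code.

```python
# Sketch: TGI-compatible /generate endpoint backed by the OpenAI API.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment


class Parameters(BaseModel):
    # Small subset of TGI generation parameters, mapped to OpenAI equivalents.
    max_new_tokens: int = 256
    temperature: float = 0.7


class GenerateRequest(BaseModel):
    inputs: str                            # prompt a TGI client (e.g. chat-ui) would send
    parameters: Parameters = Parameters()


@app.post("/generate")
def generate(req: GenerateRequest):
    # Forward the TGI-style prompt to OpenAI and return the completion in
    # the {"generated_text": ...} shape a TGI client expects.
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",             # illustrative model choice
        messages=[{"role": "user", "content": req.inputs}],
        max_tokens=req.parameters.max_new_tokens,
        temperature=req.parameters.temperature,
    )
    return {"generated_text": completion.choices[0].message.content}
```

Run it with `uvicorn app:app` (assuming the file is named `app.py`) and point chat-ui's endpoint URL at it instead of a real TGI instance. A full adapter would also need the streaming endpoint that chat-ui relies on, which this sketch omits.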
Alternatives and similar repositories for openai_text_generation_inference_server
Users interested in openai_text_generation_inference_server are comparing it to the repositories listed below.
- Manage histories of LLM-based applications ☆91 · Updated 2 years ago
- ☆37 · Updated 2 years ago
- Sakura-SOLAR-DPO: Merge, SFT, and DPO ☆116 · Updated 2 years ago
- 🎨 Imagine what Picasso could have done with AI. Self-host your StableDiffusion API. ☆50 · Updated 2 years ago
- Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA ☆104 · Updated 7 months ago
- HuggingChat-like UI in Gradio ☆70 · Updated 2 years ago
- Using open-source LLMs to build synthetic datasets for direct preference optimization ☆72 · Updated last year
- QLoRA with Enhanced Multi-GPU Support ☆37 · Updated 2 years ago
- ☆33 · Updated 2 years ago
- ☆172 · Updated 11 months ago
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆32 · Updated 3 months ago
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆116 · Updated 2 years ago
- Notus is a collection of fine-tuned LLMs using SFT, DPO, SFT+DPO, and/or any other RLHF techniques, while always keeping a data-first approach ☆169 · Updated 2 years ago
- ☆158 · Updated 2 years ago
- Finetune LLMs in a few lines of code (Text2Text, Text2Speech, Speech2Text) ☆246 · Updated 2 years ago
- Use QLoRA to tune LLMs in PyTorch Lightning with Huggingface + MLflow ☆64 · Updated 2 years ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆77 · Updated last year
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- High-level library for batched embedding generation, blazing-fast web-based RAG, and quantized index processing ⚡ ☆69 · Updated last month
- QLoRA: Efficient Finetuning of Quantized LLMs ☆79 · Updated last year
- 1-Click is all you need. ☆63 · Updated last year
- **ARCHIVED** Filesystem interface to 🤗 Hub ☆58 · Updated 2 years ago
- Tune MPTs ☆84 · Updated 2 years ago
- ☆125 · Updated last year
- Code for the NeurIPS LLM Efficiency Challenge ☆59 · Updated last year
- A stable, fast, and easy-to-use inference library with a focus on a sync-to-async API ☆47 · Updated last year
- Using short models to classify long texts ☆21 · Updated 2 years ago
- Alpaca-LoRA implementation for Huggingface using DeepSpeed and FullyShardedDataParallel ☆24 · Updated 2 years ago
- Implementation of a stop sequencer for Huggingface Transformers ☆16 · Updated 2 years ago