ksm26 / Efficiently-Serving-LLMs
Learn the ins and outs of efficiently serving large language models (LLMs). Dive into optimization techniques, including KV caching and Low-Rank Adapters (LoRA), and gain hands-on experience with Predibase's LoRAX inference framework.
☆14 · Updated last year
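The KV caching technique named in the course blurb can be illustrated with a toy example. The sketch below is not from the course materials; it is a minimal, hypothetical single-head attention loop (with identity projections instead of learned weights) showing the core idea: keys and values for past tokens are appended to a cache, so each decoding step only computes attention for the newest token instead of re-encoding the whole prefix.

```python
import math

def attention(q, keys, values):
    """Single-query softmax attention over cached keys/values (lists of vectors)."""
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]
    z = sum(weights)
    weights = [w / z for w in weights]
    out = [0.0] * d
    for w, v in zip(weights, values):
        for i in range(d):
            out[i] += w * v[i]
    return out

def decode_with_cache(tokens):
    # Append each step's key/value to the cache instead of recomputing the
    # whole prefix: O(t) attention work per new token rather than O(t^2).
    k_cache, v_cache, outputs = [], [], []
    for x in tokens:
        # In a real transformer, k/v come from learned projections; identity here.
        k_cache.append(x)
        v_cache.append(x)
        outputs.append(attention(x, k_cache, v_cache))
    return outputs

outs = decode_with_cache([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(outs[0])  # first step attends only to itself -> [1.0, 0.0]
```

Production servers (vLLM, TGI, LoRAX) apply the same idea per attention head and layer, storing the cache in GPU memory; managing that memory is what techniques like paged attention address.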
Alternatives and similar repositories for Efficiently-Serving-LLMs
Users interested in Efficiently-Serving-LLMs are comparing it to the libraries listed below.
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers. ☆34 · Updated last week
- Fine-tune an LLM to perform batch inference and online serving. ☆110 · Updated last week
- ☆20 · Updated last year
- Fine-tuning large language models (LLMs) is crucial for enhancing performance across domain-specific task applications. This comprehensiv… ☆12 · Updated 7 months ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆49 · Updated 10 months ago
- Code repository for the blog post "How to Productionize Large Language Models (LLMs)". ☆11 · Updated last year
- ☆43 · Updated 3 months ago
- A set of scripts and notebooks on LLM finetuning and dataset creation. ☆109 · Updated 7 months ago
- ☆77 · Updated 11 months ago
- Low-latency, high-accuracy custom query routers for humans and agents. Built by Prithivi Da. ☆103 · Updated last month
- ☆15 · Updated last year
- Testing the speed and accuracy of RAG with and without a Cross-Encoder reranker. ☆48 · Updated last year
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ☆76 · Updated 6 months ago
- Build agentic workflows with function calling using open LLMs. ☆26 · Updated last week
- Codebase accompanying the Summary of a Haystack paper. ☆78 · Updated 7 months ago
- ☆28 · Updated 6 months ago
- ☆20 · Updated 3 years ago
- ☆143 · Updated 9 months ago
- Chunk your text more accurately using gpt-4o-mini. ☆44 · Updated 9 months ago
- ☆48 · Updated 6 months ago
- Build enterprise RAG (Retrieval-Augmented Generation) pipelines to tackle various generative AI use cases with LLMs by simply plugging co… ☆109 · Updated 9 months ago
- ☆24 · Updated last year
- A RAG that can scale 🧑🏻‍💻 ☆11 · Updated 11 months ago
- ☆47 · Updated last year
- Code for the NeurIPS LLM Efficiency Challenge. ☆57 · Updated last year
- Running load tests on a FastAPI application using Locust. ☆15 · Updated last month
- Repository containing awesome resources for Hugging Face tooling. ☆47 · Updated last year
- Simple examples using Argilla tools to build AI. ☆52 · Updated 5 months ago
- Using open-source LLMs to build synthetic datasets for direct preference optimization. ☆61 · Updated last year
- Mistral + Haystack: build RAG pipelines that rock 🤘 ☆103 · Updated last year