ksm26 / Efficiently-Serving-LLMs
Learn the ins and outs of efficiently serving Large Language Models (LLMs). Dive into optimization techniques, including KV caching and Low Rank Adapters (LoRA), and gain hands-on experience with Predibase’s LoRAX framework inference server.
☆17Updated last year
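The KV caching technique mentioned above avoids recomputing attention keys and values for already-decoded tokens: each step appends one new key/value row to a cache and attends over it. A minimal NumPy sketch (identity projections and a single head, purely for illustration — not code from this repository):

```python
import numpy as np

def attention(q, K, V):
    # Scaled dot-product attention for a single query vector.
    scores = q @ K.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max())  # softmax, numerically stable
    weights /= weights.sum()
    return weights @ V

d = 8
rng = np.random.default_rng(0)
K_cache = np.empty((0, d))  # keys from previous decode steps
V_cache = np.empty((0, d))  # values from previous decode steps

for step in range(4):
    x = rng.standard_normal(d)         # new token's hidden state (hypothetical)
    q, k, v = x, x, x                  # identity projections for brevity
    K_cache = np.vstack([K_cache, k])  # append one row instead of recomputing all
    V_cache = np.vstack([V_cache, v])
    out = attention(q, K_cache, V_cache)
```

Each step does O(n) work against the growing cache instead of O(n²) recomputation over the full prefix, which is what makes autoregressive serving tractable.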
Alternatives and similar repositories for Efficiently-Serving-LLMs
Users interested in Efficiently-Serving-LLMs are comparing it to the libraries listed below
Sorting:
- Fine-tune an LLM to perform batch inference and online serving.☆113Updated 6 months ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets.☆78Updated last year
- A set of scripts and notebooks on LLM fine-tuning and dataset creation☆111Updated last year
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute…☆51Updated last year
- Complete implementation of Llama2 with/without KV cache & inference 🚀☆48Updated last year
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers.☆33Updated 2 months ago
- Material for the series of seminars on Large Language Models☆34Updated last year
- Fine-tuning large language models (LLMs) is crucial for enhancing performance on domain-specific tasks. This comprehensiv…☆12Updated last year
- Seamless interface for using PyTorch distributed with Jupyter notebooks☆56Updated 2 months ago
- ☆80Updated last year
- Build Enterprise RAG (Retrieval-Augmented Generation) pipelines to tackle various Generative AI use cases with LLMs by simply plugging co…☆115Updated last year
- Codebase accompanying the Summary of a Haystack paper.☆79Updated last year
- experiments with inference on llama☆103Updated last year
- LLM_library is a comprehensive repository that serves as a one-stop resource for hands-on code and insightful summaries.☆69Updated last year
- Supervised instruction fine-tuning for LLMs with the HF Trainer and DeepSpeed☆36Updated 2 years ago
- Code Repository for Blog - How to Productionize Large Language Models (LLMs)☆12Updated last year
- Model, Code & Data for the EMNLP'23 paper "Making Large Language Models Better Data Creators"☆137Updated 2 years ago
- ☆146Updated last year
- Examples of using Evidently to evaluate, test and monitor ML models.☆43Updated 2 months ago
- Notes on Direct Preference Optimization☆23Updated last year
- Code for NeurIPS LLM Efficiency Challenge☆59Updated last year
- Collection of links, tutorials, and best practices on how to collect data and build an end-to-end RLHF system to finetune Generative AI m…☆224Updated 2 years ago
- Repository containing awesome resources regarding Hugging Face tooling.☆48Updated last year
- ☆98Updated 8 months ago
- ☆20Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters☆277Updated last year
- A RAG that can scale 🧑🏻💻☆11Updated last year
- ☆15Updated 2 years ago
- ☆43Updated last year
- Includes examples on how to evaluate LLMs☆23Updated last year