predibase / lorax
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
☆3,533 · Updated 5 months ago
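A minimal sketch of what "multi-LoRA" serving means in practice: separate requests against one running LoRAX server can each select a different fine-tuned adapter over the same base model. The payload shape follows the text-generation-inference-style `/generate` API that lorax exposes; the URL, port, and adapter IDs below are placeholders, not values from this page — see the lorax README for the authoritative API.

```python
# Sketch: querying a running LoRAX server with two different LoRA adapters.
# Assumes the server is reachable on the default port and that the adapter
# IDs exist on the Hugging Face Hub (both are placeholders here).
import requests

LORAX_URL = "http://127.0.0.1:8080/generate"  # assumed default endpoint

def generate(prompt: str, adapter_id: str | None = None) -> str:
    payload = {
        "inputs": prompt,
        "parameters": {"max_new_tokens": 64},
    }
    if adapter_id is not None:
        # Selecting an adapter per request is the core multi-LoRA idea:
        # many adapters share one base model resident on the GPU.
        payload["parameters"]["adapter_id"] = adapter_id
    resp = requests.post(LORAX_URL, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()["generated_text"]

if __name__ == "__main__":
    prompt = "Write a one-line SQL query that counts users."
    print(generate(prompt, adapter_id="org/sql-lora"))   # hypothetical adapter
    print(generate(prompt, adapter_id="org/chat-lora"))  # hypothetical adapter
```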
Alternatives and similar repositories for lorax
Users interested in lorax are comparing it to the libraries listed below.
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆2,932 · Updated this week
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,726 · Updated this week
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,868 · Updated last year
- Tools for merging pretrained large language models. ☆6,447 · Updated 2 weeks ago
- Enforce the output format (JSON Schema, Regex, etc.) of a language model ☆1,952 · Updated 2 months ago
- Minimalistic large language model 3D-parallelism training ☆2,323 · Updated 2 months ago
- Efficient Retrieval Augmentation and Generation Framework ☆1,744 · Updated 10 months ago
- Infinity is a high-throughput, low-latency serving engine for text-embeddings, reranking models, clip, clap and colpali ☆2,549 · Updated this week
- Fast, Accurate, Lightweight Python library to make State of the Art Embedding ☆2,488 · Updated last week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,126 · Updated this week
- A lightweight, low-dependency, unified API to use all common reranking and cross-encoder models. ☆1,572 · Updated 5 months ago
- Easily use and train state of the art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… ☆3,761 · Updated 6 months ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,238 · Updated this week
- Large-scale LLM inference engine ☆1,591 · Updated this week
- A framework for serving and evaluating LLM routers - save LLM costs without compromising quality ☆4,411 · Updated last year
- Fast State-of-the-Art Static Embeddings ☆1,900 · Updated this week
- Optimizing inference proxy for LLMs ☆3,106 · Updated last week
- PyTorch native post-training library ☆5,595 · Updated this week
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,529 · Updated 9 months ago
- Go ahead and axolotl questions ☆10,798 · Updated this week
- AdalFlow: The library to build & auto-optimize LLM applications. ☆3,873 · Updated last month
- Bringing BERT into modernity via both architecture changes and scaling ☆1,563 · Updated 4 months ago
- Create Custom LLMs ☆1,772 · Updated last week
- Training LLMs with QLoRA + FSDP ☆1,529 · Updated last year
- DataComp for Language Models ☆1,386 · Updated 2 months ago
- Everything about the SmolLM and SmolVLM family of models ☆3,408 · Updated 2 months ago
- LLMPerf is a library for validating and benchmarking LLMs ☆1,041 · Updated 11 months ago
- Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization ☆1,380 · Updated 11 months ago
- AllenAI's post-training codebase ☆3,294 · Updated this week