huseinzol05 / transformers-continuous-batching
Lightweight continuous batching with OpenAI API compatibility, built on HuggingFace Transformers, including T5 and Whisper.
☆29 · Updated 10 months ago
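Since the repository advertises OpenAI compatibility, a stock OpenAI client should be able to talk to it. Below is a minimal sketch, assuming the server runs locally on port 8000 and exposes the usual `/v1` chat-completions route; the base URL, API key, and model name are illustrative assumptions, not taken from the repository's docs.

```python
# Hypothetical client-side usage of an OpenAI-compatible continuous-batching
# server. Endpoint, key, and model name below are assumptions for illustration.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local server address
    api_key="EMPTY",                      # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="your-model-name",  # placeholder: whatever model the server loaded
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

The point of continuous batching is that many such concurrent requests are merged into shared forward passes on the server side; the client code itself is unchanged from talking to the OpenAI API.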
Alternatives and similar repositories for transformers-continuous-batching
Users interested in transformers-continuous-batching are comparing it to the libraries listed below.
- ☆51 · Updated last year
- Easy-to-Use, High-Performance Knowledge Distillation for LLMs ☆97 · Updated 9 months ago
- entropix-style sampling + GUI ☆27 · Updated last year
- GPT-4 Level Conversational QA Trained in a Few Hours ☆65 · Updated last year
- AnyModal is a Flexible Multimodal Language Model Framework for PyTorch ☆103 · Updated last year
- The Benefits of a Concise Chain of Thought on Problem Solving in Large Language Models ☆24 · Updated last year
- BUD-E (Buddy) is an open-source voice assistant framework that facilitates seamless interaction with AI models and APIs, enabling the cre… ☆22 · Updated last year
- ☆63 · Updated 7 months ago
- High-level library for batched embedding generation, blazingly fast web-based RAG, and quantized index processing ⚡ ☆69 · Updated 2 months ago
- Tcurtsni: Reverse Instruction Chat. Ever wonder what your LLM wants to ask you? ☆23 · Updated last year
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding WITHOUT retraining. ☆49 · Updated 3 months ago
- Yet another frontend for LLMs, written in .NET and WinUI 3 ☆10 · Updated 4 months ago
- ☆109 · Updated 5 months ago
- Optimizing Causal LMs through GRPO with weighted reward functions and automated hyperparameter tuning using Optuna ☆59 · Updated 3 months ago
- LLM-Training-API: Including Embeddings & ReRankers, mergekit, LaserRMT ☆27 · Updated last year
- Modified beam search with periodic restart ☆12 · Updated last year
- OpenPipe Reinforcement Learning Experiments ☆32 · Updated 10 months ago
- ☆17 · Updated last year
- A quick and optimized solution to manage llama-based GGUF quantized models, download GGUF files, retrieve message formatting, add more mo… ☆12 · Updated 2 years ago
- [ICLR'25] ApolloMoE: Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts ☆52 · Updated last year
- ☆32 · Updated last year
- Nexusflow function call, tool use, and agent benchmarks. ☆30 · Updated last year
- ☆24 · Updated last year
- Lightweight toolkit package to train and fine-tune 1.58-bit language models ☆112 · Updated 8 months ago
- Simple examples using Argilla tools to build AI ☆57 · Updated last year
- A stable, fast, and easy-to-use inference library with a focus on a sync-to-async API ☆47 · Updated last year
- ☆15 · Updated last month
- ☆68 · Updated last year
- ☆119 · Updated last year
- A pipeline-parallel training script for LLMs. ☆166 · Updated 9 months ago