bjj / exllamav2-openai-server
An OpenAI API-compatible LLM inference server based on ExLlamaV2 (see the usage sketch below).
☆25 · Updated last year
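Because the server mirrors the OpenAI REST API, the official `openai` Python client can be pointed at it. A minimal sketch, assuming a locally running instance; the base URL, API key, and model name are illustrative placeholders, not values documented by this repository:

```python
# Minimal sketch: querying an OpenAI-compatible server with the official client.
# The base_url, api_key, and model name are illustrative assumptions, not
# values documented by exllamav2-openai-server.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local server address
    api_key="not-needed",                 # local servers often ignore the key
)

response = client.chat.completions.create(
    model="local-model",  # placeholder model identifier
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```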
Alternatives and similar repositories for exllamav2-openai-server
Users interested in exllamav2-openai-server are comparing it to the libraries listed below.
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vecto… (see the sketch after this list) ☆42 · Updated last year
- GPT-2 small trained on phi-like data ☆67 · Updated last year
- Experimental sampler to make LLMs more creative ☆31 · Updated 2 years ago
- Let's create synthetic textbooks together :) ☆75 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models ☆177 · Updated last year
- ☆51 · Updated last year
- ☆26 · Updated 2 years ago
- ☆116 · Updated 10 months ago
- An unsupervised model merging algorithm for Transformers-based language models. ☆106 · Updated last year
- ☆73 · Updated 2 years ago
- run ollama & gguf easily with a single command ☆52 · Updated last year
- ☆49 · Updated last year
- A guidance compatibility layer for llama-cpp-python ☆36 · Updated 2 years ago
- After my server ui improvements were successfully merged, consider this repo a playground for experimenting, tinkering and hacking around… ☆53 · Updated last year
- GPT-4 Level Conversational QA Trained In a Few Hours ☆65 · Updated last year
- an implementation of Self-Extend, to expand the context window via grouped attention ☆118 · Updated last year
- ☆40 · Updated 2 years ago
- ☆23 · Updated last year
- Easily view and modify JSON datasets for large language models ☆83 · Updated 5 months ago
- Glyphs, acting as collaboratively defined symbols linking related concepts, add a layer of multidimensional semantic richness to user-AI … ☆52 · Updated 8 months ago
- Full finetuning of large language models without large memory requirements ☆93 · Updated last month
- ☆67 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- entropix style sampling + GUI ☆27 · Updated 11 months ago
- Lightweight continuous batching OpenAI compatibility using HuggingFace Transformers, including T5 and Whisper. ☆29 · Updated 7 months ago
- Train Llama LoRAs Easily ☆30 · Updated 2 years ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆63 · Updated 2 years ago
- Very basic framework for composable parameterized large language model (Q)LoRA / (Q)DoRA fine-tuning using mlx, mlx_lm, and OgbujiPT. ☆42 · Updated 4 months ago
- Transplants vocabulary between language models, enabling the creation of draft models for speculative decoding WITHOUT retraining. ☆42 · Updated 2 weeks ago
- Ollama models of NousResearch/Hermes-2-Pro-Mistral-7B-GGUF ☆31 · Updated last year
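The activation-steering entry above (redirecting a model's outputs by adding steering vectors to its activations) can be illustrated with a forward hook on one transformer block. A minimal sketch in PyTorch with HuggingFace Transformers; the model, layer index, and the random stand-in vector are assumptions for illustration, not details taken from that repository:

```python
# Minimal sketch of activation steering: add a fixed "steering vector" to the
# residual stream of one transformer layer via a forward hook.
# Model name, layer index, and the vector itself are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any HF causal LM works for the sketch
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

layer = model.transformer.h[6]                   # GPT-2 block to steer (arbitrary choice)
steer = torch.randn(model.config.n_embd) * 4.0   # stand-in steering vector

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple; hidden states are the first element.
    hidden = output[0] + steer                   # shift every position's activation
    return (hidden,) + output[1:]

handle = layer.register_forward_hook(add_steering)
ids = tok("The weather today is", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=20, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
handle.remove()                                  # detach the hook when done
```

In practice a steering vector is usually derived from activation differences between contrasting prompts rather than sampled at random; the random vector here only demonstrates the injection mechanism.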