sasha0552 / pascal-pkgs-ci
The main repository for building Pascal-compatible versions of ML applications and libraries.
☆90 · Updated 2 weeks ago
Alternatives and similar repositories for pascal-pkgs-ci
Users interested in pascal-pkgs-ci are comparing it to the libraries listed below.
- A fork of vLLM enabling Pascal architecture GPUs ☆28 · Updated 3 months ago
- Review/Check GGUF files and estimate the memory usage and maximum tokens per second. ☆173 · Updated this week
- CI scripts designed to build a Pascal-compatible version of vLLM. ☆12 · Updated 9 months ago
- llama.cpp fork with additional SOTA quants and improved performance ☆519 · Updated this week
- Automatically quantize GGUF models ☆181 · Updated this week
- LM inference server implementation based on *.cpp. ☆203 · Updated this week
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆385 · Updated this week
- Open Source Text Embedding Models with OpenAI Compatible API ☆153 · Updated 10 months ago
- The official API server for Exllama. OpenAI-compatible, lightweight, and fast. ☆969 · Updated this week
- GPU Power and Performance Manager ☆58 · Updated 7 months ago
- Lightweight Inference server for OpenVINO ☆176 · Updated last week
- Implements harmful/harmless refusal removal using pure HF Transformers ☆841 · Updated 11 months ago
- An OpenAI-compatible API for chat with image input and questions about the images, aka multimodal. ☆255 · Updated 3 months ago
- Make abliterated models with transformers, easy and fast ☆71 · Updated last month
- Run DeepSeek-R1 GGUFs on KTransformers ☆231 · Updated 3 months ago
- A daemon that automatically manages the performance states of NVIDIA GPUs. ☆86 · Updated last month
- An Open WebUI function for a better R1 experience ☆78 · Updated 2 months ago
- ☆71 · Updated last week
- A text-to-speech and speech-to-text server compatible with the OpenAI API, supporting Whisper, FunASR, Bark, and CosyVoice backends. ☆118 · Updated this week
- DFloat11: Lossless LLM Compression for Efficient GPU Inference ☆405 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆130 · Updated 11 months ago
- ☆35 · Updated this week
- Run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆65 · Updated this week
- A fast batching API to serve LLM models ☆181 · Updated last year
- Model swapping for llama.cpp (or any local OpenAI-compatible server) ☆848 · Updated this week
- Download models from the Ollama library, without Ollama ☆84 · Updated 6 months ago
- ☆88 · Updated 2 months ago
- Multi AMD GPU Setup for AI Development on Ubuntu with ROCM ☆32 · Updated 2 months ago
- Self-hosted Hugging Face mirror service. ☆169 · Updated 3 weeks ago
- LLM inference in C/C++ ☆21 · Updated 2 months ago