sasha0552 / pascal-pkgs-ci
The main repository for building Pascal-compatible versions of ML applications and libraries.
☆158, updated 4 months ago
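"Pascal-compatible" here refers to NVIDIA's compute capability 6.x generation (GTX 10-series, P40, P100), which many upstream ML projects no longer build for. As a quick orientation, here is a minimal PyTorch sketch for checking whether a local GPU is Pascal; it is illustrative only and not part of pascal-pkgs-ci itself.

```python
# Minimal sketch: detect whether local GPUs are Pascal (compute capability 6.x).
# Illustrative only; not part of pascal-pkgs-ci. Requires PyTorch with CUDA.
import torch

if not torch.cuda.is_available():
    raise SystemExit("No CUDA device found")

for idx in range(torch.cuda.device_count()):
    major, minor = torch.cuda.get_device_capability(idx)
    name = torch.cuda.get_device_name(idx)
    # Pascal GPUs (GTX 10-series, P40, P100) report compute capability 6.x.
    tag = "Pascal" if major == 6 else f"sm_{major}{minor}"
    print(f"GPU {idx}: {name} -> compute {major}.{minor} ({tag})")
```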
Alternatives and similar repositories for pascal-pkgs-ci
Users interested in pascal-pkgs-ci are comparing it to the libraries listed below:
- LM inference server implementation based on *.cpp (☆294, updated last month)
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 (☆352, updated this week)
- Review/check GGUF files and estimate memory usage and maximum tokens per second (☆221, updated 4 months ago); a rough memory-estimation sketch follows the list
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs (☆609, updated this week)
- Run DeepSeek-R1 GGUFs on KTransformers (☆259, updated 9 months ago)
- The official API server for Exllama: OpenAI-compatible, lightweight, and fast (☆1,104, updated last week); see the generic client sketch after the list
- Pure C++ implementation of several models for real-time chatting on your computer, CPU & GPU (☆758, updated this week)
- llama.cpp fork with additional SOTA quants and improved performance (☆1,399, updated last week)
- Self-hosted huggingface mirror service (☆211, updated 5 months ago)
- CI scripts designed to build a Pascal-compatible version of vLLM (☆12, updated last year)
- ☆241, updated 3 months ago
- Open-source text embedding models with an OpenAI-compatible API (☆164, updated last year)
- NVIDIA Linux open GPU with P2P support (☆101, updated 3 weeks ago)
- LLM model quantization (compression) toolkit with hw acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU and Intel/AMD/Apple CPU vi… (☆943, updated this week)
- An OpenAI-compatible API for chat with image input and questions about the images, i.e. multimodal (☆267, updated 9 months ago)
- ☆109, updated 4 months ago
- Docker Compose setup to run vLLM on Windows (☆112, updated last year)
- Automatically quantize GGUF models (☆218, updated last week)
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2 (☆165, updated last year)
- Comparison of language model inference engines (☆238, updated last year)
- OpenAI-compatible API for the TensorRT-LLM Triton backend (☆218, updated last year)
- Formatron empowers everyone to control the format of language models' output with minimal overhead (☆232, updated 6 months ago)
- DFloat11 [NeurIPS '25]: Lossless Compression of LLMs and DiTs for Efficient GPU Inference (☆580, updated last month)
- A text-to-speech and speech-to-text server compatible with the OpenAI API, supporting Whisper, FunASR, Bark, and CosyVoice backends (☆186, updated last week)
- A pipeline-parallel training script for LLMs (☆164, updated 8 months ago)
- A high-throughput and memory-efficient inference and serving engine for LLMs (Windows build & kernels) (☆262, updated last month)
- A fork of vLLM enabling Pascal-architecture GPUs (☆30, updated 10 months ago)
- Inference engine for Intel devices: serve LLMs, VLMs, Whisper, Kokoro-TTS, embedding, and rerank models over OpenAI endpoints (☆267, updated this week)
- Sparse inference for transformer-based LLMs (☆215, updated 4 months ago)
- Fully featured, beautiful web interface for vLLM, built with Next.js (☆165, updated 2 weeks ago)
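Several entries above estimate GGUF memory usage. The back-of-the-envelope version of that calculation is: weight bytes ≈ parameter count × bits-per-weight / 8, plus a KV cache of 2 × layers × kv-heads × head-dim × context × bytes-per-element. The sketch below uses hypothetical Llama-3-8B-like shapes and is not the exact method those tools implement.

```python
# Rough sketch of GGUF memory estimation (not the exact method used by the
# tools listed above). All default shapes are Llama-3-8B-like and illustrative.
def estimate_gguf_memory_gib(
    n_params: float = 8.0e9,       # total parameters (assumed)
    bits_per_weight: float = 4.5,  # e.g. roughly 4.5 for a Q4_K_M quant
    n_layers: int = 32,
    n_kv_heads: int = 8,           # grouped-query attention
    head_dim: int = 128,
    context_len: int = 8192,
    kv_bytes_per_elem: int = 2,    # fp16 KV cache
) -> float:
    weight_bytes = n_params * bits_per_weight / 8
    # KV cache: 2 (K and V) * layers * kv_heads * head_dim * context * bytes
    kv_bytes = 2 * n_layers * n_kv_heads * head_dim * context_len * kv_bytes_per_elem
    return (weight_bytes + kv_bytes) / 2**30

print(f"~{estimate_gguf_memory_gib():.1f} GiB")  # ~4.2 GiB weights + ~1.0 GiB KV
```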
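Many of the servers above (the Exllama API server, the embedding server, the multimodal server, the Intel inference engine) expose OpenAI-compatible endpoints, so a single client works against all of them. A minimal sketch using the official openai Python package follows; the base URL, API key, and model name are placeholders for whatever your local server actually reports.

```python
# Minimal sketch of calling any OpenAI-compatible local server listed above.
# Base URL, API key, and model name are placeholders; substitute your server's
# actual values. Requires: pip install openai
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder: your local endpoint
    api_key="not-needed-locally",         # many local servers ignore the key
)

resp = client.chat.completions.create(
    model="local-model",  # placeholder: use the model id your server exposes
    messages=[{"role": "user", "content": "Say hello in five words."}],
    max_tokens=32,
)
print(resp.choices[0].message.content)
```

Because these servers share the OpenAI wire format, switching between them is usually just a matter of changing `base_url` and `model`.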