sasha0552 / pascal-pkgs-ci
The main repository for building Pascal-compatible versions of ML applications and libraries.
☆128 · Updated last month
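Pascal-generation cards (GTX 10xx, Tesla P40/P100) report CUDA compute capability 6.x, which is what these builds target. A minimal sketch, assuming PyTorch with CUDA support is installed, for checking whether the local GPU is a Pascal part:

```python
import torch

def is_pascal(device: int = 0) -> bool:
    """Return True if the given CUDA device is a Pascal-generation GPU.

    Pascal parts report CUDA compute capability 6.x (sm_60/61/62),
    the architecture these Pascal-compatible builds are compiled for.
    """
    if not torch.cuda.is_available():
        return False
    major, _minor = torch.cuda.get_device_capability(device)
    return major == 6

if __name__ == "__main__":
    if is_pascal():
        print(torch.cuda.get_device_name(0), "is Pascal (compute capability 6.x)")
    else:
        print("No Pascal GPU detected")
```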
Alternatives and similar repositories for pascal-pkgs-ci
Users interested in pascal-pkgs-ci are comparing it to the libraries listed below.
- LM inference server implementation based on *.cpp. ☆273 · Updated last month
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60. ☆247 · Updated last week
- CI scripts designed to build a Pascal-compatible version of vLLM. ☆12 · Updated last year
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs. ☆499 · Updated this week
- Review/check GGUF files and estimate the memory usage and maximum tokens per second. ☆205 · Updated last month
- llama.cpp fork with additional SOTA quants and improved performance. ☆1,181 · Updated this week
- Large-scale LLM inference engine. ☆1,552 · Updated this week
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU). ☆704 · Updated last week
- Lightweight inference server for OpenVINO. ☆211 · Updated this week
- OpenAI compatible API for the TensorRT-LLM Triton backend. ☆214 · Updated last year
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆163 · Updated last year
- An OpenAI API compatible server for multimodal chat: image input plus questions about the images. ☆260 · Updated 6 months ago
- InferX is an Inference Function-as-a-Service platform. ☆133 · Updated last week
- Self-hosted Hugging Face mirror service. ☆194 · Updated 2 months ago
- Comparison of Language Model Inference Engines. ☆229 · Updated 9 months ago
- LLM model quantization (compression) toolkit with hardware acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU and Intel/AMD/Apple CPU vi… ☆784 · Updated this week
- A fork of vLLM enabling Pascal architecture GPUs. ☆28 · Updated 7 months ago
- Run DeepSeek-R1 GGUFs on KTransformers. ☆251 · Updated 6 months ago
- The official API server for Exllama. OAI compatible, lightweight, and fast (see the request sketch after this list). ☆1,050 · Updated 3 weeks ago
- DFloat11: Lossless LLM Compression for Efficient GPU Inference. ☆536 · Updated 3 weeks ago
- NVIDIA Linux open GPU with P2P support. ☆50 · Updated 3 weeks ago
- Model swapping for llama.cpp (or any local OpenAI API compatible server). ☆1,530 · Updated last week
- Tool to download models from Huggingface Hub and convert them to GGML/GGUF for llama.cpp. ☆160 · Updated 4 months ago
- Sparse inferencing for transformer-based LLMs. ☆197 · Updated last month
- Enhancing Translation with RAG-Powered Large Language Models. ☆83 · Updated last month
- Automatically quantize GGUF models. ☆202 · Updated this week
- Code execution utilities for Open WebUI & Ollama. ☆296 · Updated 10 months ago
- Fully-featured, beautiful web interface for vLLM, built with NextJS. ☆154 · Updated 4 months ago
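Several of the servers listed above (the Exllama API server, the TensorRT-LLM Triton proxy, the llama.cpp model swapper) expose OpenAI-compatible chat endpoints, so a single client works against any of them. A minimal sketch using `requests`; the base URL, port, and model name are assumptions to adjust for your own deployment:

```python
import requests

# Assumption: an OpenAI-compatible server (vLLM, an Exllama API server,
# llama.cpp, etc.) is listening locally; adjust host, port, and model name.
BASE_URL = "http://localhost:8000/v1"

def chat(prompt: str, model: str = "local-model") -> str:
    """Send one chat turn to an OpenAI-compatible /chat/completions endpoint."""
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        json={
            "model": model,  # placeholder: use a name your server actually serves
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 128,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Say hello in one sentence."))
```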