intel / polite-guard
Source code for Intel's Polite Guard NLP project
☆37 · Updated 3 weeks ago
Alternatives and similar repositories for polite-guard
Users interested in polite-guard are comparing it with the repositories listed below.
- ☆62 · Updated last year
- Fully Open Language Models with Stellar Performance ☆247 · Updated 3 weeks ago
- General-purpose GPU compute framework built on Vulkan to support 1000s of cross-vendor graphics cards (AMD, Qualcomm, NVIDIA & friends). … ☆52 · Updated 6 months ago
- llama.cpp fork used by GPT4All ☆56 · Updated 6 months ago
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆170 · Updated 3 weeks ago
- Online compiler for HIP and NVIDIA® CUDA® code to WebGPU ☆191 · Updated 7 months ago
- LocalScore is an open benchmark that helps you understand how well your computer can handle local AI tasks. ☆55 · Updated 2 months ago
- A fork of OpenBLAS with Armv8-A SVE (Scalable Vector Extension) support ☆17 · Updated 5 years ago
- GPU-targeted, vendor-agnostic AI library for Windows, and a Mistral model implementation. ☆58 · Updated last year
- Editor with LLM generation tree exploration ☆73 · Updated 6 months ago
- Lightweight Llama 3 8B inference engine in CUDA C ☆48 · Updated 5 months ago
- Tensor library for machine learning ☆17 · Updated 2 years ago
- Thin wrapper around GGML to make life easier ☆40 · Updated 2 months ago
- 33B Chinese LLM, DPO QLoRA, 100K context, AirLLM 70B inference with a single 4GB GPU ☆13 · Updated last year
- ☆54 · Updated last year
- An application that performs real-time inference on audio from an ALSA capture device ☆34 · Updated 2 months ago
- A JPEG image compression service using partially homomorphic encryption. ☆31 · Updated 5 months ago
- NASA Data Acquisition System (NDAS) is an integrated suite of applications that manages the complete data acquisition lifecycle—from acqui… ☆94 · Updated 2 weeks ago
- Tabled asymmetric numeral system ☆37 · Updated last year
- Lightweight C inference for Qwen3 GGUF, with the smallest model (0.6B) at full precision (FP32) ☆16 · Updated 2 weeks ago
- Iterate quickly with llama.cpp hot reloading; use the llama.cpp bindings with bun.sh ☆51 · Updated last year
- A modified version of the llama2.c LLM inference app, ported to run on 32-bit-capable DOS machines. ☆22 · Updated 3 months ago
- Light WebUI for lm.rs ☆24 · Updated 10 months ago
- A Mojo implementation of the Tiny Stable Diffusion model ☆51 · Updated last year
- Transformer GPU VRAM estimator ☆66 · Updated last year
- Kolosal AI is an open-source, lightweight alternative to LM Studio for running LLMs 100% offline on your device. ☆295 · Updated 3 months ago
- This repository contains the official authors' implementation associated with the paper "TVMC: Time-Varying Mesh Compression Using Volume-… ☆31 · Updated 3 months ago
- Granite 3.1 Language Models ☆117 · Updated 2 months ago
- Asynchronous/distributed speculative evaluation for Llama 3 ☆39 · Updated last year
- Phi-4 Multimodal Instruct - OpenAI-compatible endpoint and Docker image for self-hosting ☆39 · Updated 5 months ago