Sumandora / remove-refusals-with-transformers
Implements harmful/harmless refusal removal using pure HF Transformers
☆1,352 · Updated 2 weeks ago
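The repositories in this listing center on "abliteration": removing refusal behavior by computing a direction in activation space from contrasting harmful and harmless prompts, then projecting that direction out of the model's activations or weights. As a minimal, hypothetical numpy sketch of that core step (not this repo's actual code; `refusal_direction` and `ablate` are illustrative names, and real implementations operate on hidden states captured from a Transformers model):

```python
import numpy as np

def refusal_direction(harmful_acts: np.ndarray, harmless_acts: np.ndarray) -> np.ndarray:
    """Difference-of-means direction between activations on harmful
    vs. harmless prompts, normalized to unit length."""
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(acts: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of each activation along `direction`,
    so the ablated activations have zero projection onto it."""
    return acts - np.outer(acts @ direction, direction)
```

In practice the direction is estimated at one or more layers from cached residual-stream activations, and ablation is applied either at inference time (as above) or baked into the weight matrices that write to the residual stream.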
Alternatives and similar repositories for remove-refusals-with-transformers
Users interested in remove-refusals-with-transformers compare it to the libraries listed below.
- Simple Python library/structure to ablate features in LLMs which are supported by TransformerLens ☆539 · Updated last year
- The official API server for Exllama. OAI compatible, lightweight, and fast. ☆1,097 · Updated this week
- Make abliterated models with transformers, easy and fast ☆101 · Updated this week
- llama.cpp fork with additional SOTA quants and improved performance ☆1,387 · Updated this week
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆588 · Updated last week
- Large-scale LLM inference engine ☆1,603 · Updated 2 weeks ago
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆338 · Updated this week
- Reliable model swapping for any local OpenAI/Anthropic compatible server - llama.cpp, vllm, etc. ☆2,025 · Updated this week
- Simple Go utility to download Hugging Face models and datasets ☆776 · Updated 3 months ago
- GGUF quantization support for native ComfyUI models ☆2,940 · Updated last week
- LLM frontend in a single HTML file ☆670 · Updated 3 weeks ago
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) ☆753 · Updated this week
- Run DeepSeek-R1 GGUFs on KTransformers ☆258 · Updated 9 months ago
- The main repository for building Pascal-compatible versions of ML applications and libraries. ☆155 · Updated 3 months ago
- Review/check GGUF files and estimate the memory usage and maximum tokens per second. ☆219 · Updated 3 months ago
- Dolphin System Messages ☆369 · Updated 9 months ago
- Evaluating and unaligning Chinese LLM censorship ☆70 · Updated 7 months ago
- Autonomously train research-agent LLMs on custom data using reinforcement learning and self-verification. ☆671 · Updated 8 months ago
- LM inference server implementation based on *.cpp. ☆293 · Updated 2 weeks ago
- Create Custom LLMs ☆1,780 · Updated last month
- LLM model quantization (compression) toolkit with hardware acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU and Intel/AMD/Apple CPU vi… ☆924 · Updated this week
- MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX. ☆1,910 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs (Windows build & kernels) ☆253 · Updated 2 weeks ago
- Automatically quant GGUF models ☆219 · Updated last month
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,379 · Updated 3 months ago
- This repo contains the source code for RULER: What’s the Real Context Size of Your Long-Context Language Models? ☆1,382 · Updated 3 weeks ago
- Multiple NVIDIA GPUs or Apple Silicon for Large Language Model Inference? ☆1,848 · Updated last year
- The all-in-one RWKV runtime box with embed, RAG, AI agents, and more. ☆589 · Updated last month
- Web UI for ExLlamaV2 ☆514 · Updated 10 months ago
- An AI-powered interactive avatar engine using Live2D, LLM, ASR, TTS, and RVC. Ideal for VTubing, streaming, and virtual assistant applica… ☆951 · Updated last month