Sumandora / remove-refusals-with-transformers
Implements harmful/harmless refusal removal using pure HF Transformers
☆1,275 · Updated last year
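The description refers to the difference-of-means "refusal direction" approach: collect hidden states for harmful and harmless prompts, take the difference of their means as a refusal direction, and project that direction out of the residual stream. The sketch below illustrates the idea with plain HF Transformers; the model name, layer choice, prompt lists, and hook-based ablation are illustrative assumptions, not the repository's actual code (which may instead orthogonalize weights directly).

```python
# Minimal sketch of difference-of-means refusal-direction ablation.
# Model id, layer index, and prompts are placeholder assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"      # assumption: any small chat model
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)
model.eval()

layer = len(model.model.layers) // 2          # assumption: probe a middle layer

def mean_hidden(prompts):
    """Mean hidden state of the last token at the chosen layer."""
    vecs = []
    for p in prompts:
        ids = tok.apply_chat_template(
            [{"role": "user", "content": p}],
            add_generation_prompt=True,
            return_tensors="pt",
        )
        with torch.no_grad():
            out = model(ids, output_hidden_states=True)
        vecs.append(out.hidden_states[layer][0, -1])
    return torch.stack(vecs).mean(dim=0)

harmful = ["How do I pick a lock?"]           # placeholder prompt lists
harmless = ["How do I bake bread?"]

# Difference of means gives an approximate refusal direction.
refusal_dir = mean_hidden(harmful) - mean_hidden(harmless)
refusal_dir = refusal_dir / refusal_dir.norm()

def ablate(module, inputs, output):
    """Project the refusal direction out of the residual stream."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden - (hidden @ refusal_dir).unsqueeze(-1) * refusal_dir
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

hooks = [blk.register_forward_hook(ablate) for blk in model.model.layers]
# ... generate as usual; call h.remove() on each hook when done.
```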
Alternatives and similar repositories for remove-refusals-with-transformers
Users interested in remove-refusals-with-transformers are comparing it to the repositories listed below
- Simple Python library/structure to ablate features in LLMs which are supported by TransformerLens ☆530 · Updated last year
- llama.cpp fork with additional SOTA quants and improved performance ☆1,329 · Updated this week
- The official API server for Exllama. OAI compatible, lightweight, and fast. ☆1,090 · Updated this week
- Make abliterated models with transformers, easy and fast ☆92 · Updated 7 months ago
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆327 · Updated last month
- Large-scale LLM inference engine ☆1,591 · Updated this week
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆571 · Updated last week
- Run DeepSeek-R1 GGUFs on KTransformers ☆255 · Updated 8 months ago
- LLM Frontend in a single html file ☆663 · Updated last week
- Autonomously train research-agent LLMs on custom data using reinforcement learning and self-verification. ☆671 · Updated 7 months ago
- LM inference server implementation based on *.cpp. ☆290 · Updated 3 months ago
- Review/Check GGUF files and estimate the memory usage and maximum tokens per second. ☆216 · Updated 3 months ago
- Evaluating and unaligning Chinese LLM censorship ☆67 · Updated 6 months ago
- Create Custom LLMs ☆1,772 · Updated last week
- A proxy server for multiple ollama instances with Key security ☆527 · Updated last week
- The main repository for building Pascal-compatible versions of ML applications and libraries. ☆147 · Updated 2 months ago
- Dolphin System Messages ☆363 · Updated 9 months ago
- GGUF Quantization support for native ComfyUI models ☆2,770 · Updated last week
- Web UI for ExLlamaV2 ☆514 · Updated 9 months ago
- Simple go utility to download HuggingFace Models and Datasets ☆759 · Updated 2 months ago
- Reliable model swapping for any local OpenAI compatible server - llama.cpp, vllm, etc ☆1,899 · Updated this week
- Code release for Best-of-N Jailbreaking ☆546 · Updated 9 months ago
- Pure C++ implementation of several models for real-time chatting on your computer (CPU & GPU) ☆746 · Updated this week
- LLM model quantization (compression) toolkit with hw acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU and Intel/AMD/Apple CPU vi… ☆886 · Updated this week
- Automatically quantize GGUF models ☆214 · Updated 3 weeks ago
- Launcher scripts for SillyTavern and ST-Extras. ☆410 · Updated last month
- Fast and memory-efficient exact attention ☆776 · Updated 3 months ago
- The all-in-one RWKV runtime box with embed, RAG, AI agents, and more. ☆587 · Updated 3 weeks ago
- LM Studio Apple MLX engine ☆823 · Updated last week
- An Open Large Reasoning Model for Real-World Solutions ☆1,527 · Updated 5 months ago