NolanoOrg / cformers
SoTA Transformers with C-backend for fast inference on your CPU.
☆308 · Updated last year
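For context, cformers wraps its C backend behind a small Python interface for quantized CPU inference. The sketch below shows the kind of call flow involved; the import path, the `AutoInference` class name, the `num_tokens_to_generate` keyword, and the `token_str` result field are assumptions based on the project's README-style interface, not a verified API.

```python
# Minimal usage sketch (assumed interface, not verified against current cformers):
# the quantized matmuls run in the C backend, Python only orchestrates.
from cformers.interface import AutoInference as AI  # assumed import path

ai = AI("EleutherAI/gpt-j-6B")  # assumed: fetches a pre-quantized model on first use
out = ai.generate("def parse_html(html_doc):", num_tokens_to_generate=100)
print(out["token_str"])  # assumed key holding the decoded completion
```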
Alternatives and similar repositories for cformers
Users interested in cformers are comparing it to the libraries listed below.
- C++ implementation for BLOOM ☆809 · Updated 2 years ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆122 · Updated last year
- fastLLaMa: An experimental high-performance framework for running Decoder-only LLMs with 4-bit quantization in Python using a C/C++ backe… ☆408 · Updated 2 years ago
- ☆534 · Updated last year
- Python bindings for llama.cpp ☆197 · Updated 2 years ago
- A torchless, c++ rwkv implementation using 8bit quantization, written in cuda/hip/vulkan for maximum compatibility and minimum dependenci… ☆311 · Updated last year
- ☆457 · Updated last year
- Falcon LLM ggml framework with CPU and GPU support ☆245 · Updated last year
- LLM-based code completion engine ☆189 · Updated 4 months ago
- C++ implementation for 💫StarCoder ☆452 · Updated last year
- Tune any FALCON in 4-bit ☆466 · Updated last year
- Python bindings for ggml ☆140 · Updated 9 months ago
- Command-line script for running inference with models such as MPT-7B-Chat ☆100 · Updated last year
- Convenient wrapper for fine-tuning and inference of Large Language Models (LLMs) with several quantization techniques (GPTQ, bitsandbytes… ☆147 · Updated last year
- ☆412 · Updated last year
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & Javascript ☆581 · Updated 11 months ago
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆109 · Updated last year
- ggml implementation of BERT ☆491 · Updated last year
- An Autonomous LLM Agent that runs on Wizcoder-15B ☆334 · Updated 7 months ago
- Code for paper: "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆365 · Updated last year
- Full finetuning of large language models without large memory requirements ☆93 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆421 · Updated last year
- ☆405 · Updated 2 years ago
- Tune MPTs ☆84 · Updated last year
- Instruct-tuning LLaMA on consumer hardware ☆65 · Updated 2 years ago
- GPTQ inference Triton kernel ☆299 · Updated 2 years ago
- Extends the original llama.cpp repo to support the RedPajama model ☆117 · Updated 8 months ago
- CLIP inference in plain C/C++ with no extra dependencies ☆498 · Updated 9 months ago
- LLaMa retrieval plugin script using OpenAI's retrieval plugin ☆323 · Updated 2 years ago
- ☆542 · Updated 5 months ago