NolanoOrg / cformers
SoTA Transformers with C-backend for fast inference on your CPU.
☆311 · Updated last year
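For reference, a minimal usage sketch of cformers based on the AutoInference interface shown in the project README; the exact import path, model id, and `num_tokens_to_generate` parameter are assumptions and may differ across versions:

```python
# Hedged sketch of cformers' AutoInference interface; the import path,
# model id, and parameter names below are assumptions taken from the
# README and may not match the current package layout.
from cformers.interface import AutoInference as AI

ai = AI('EleutherAI/gpt-j-6B')  # fetches a pre-quantized int4 model
output = ai.generate('def parse_html(html_doc):', num_tokens_to_generate=100)
print(output['token_str'])
```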
Alternatives and similar repositories for cformers:
Users interested in cformers are comparing it to the libraries listed below.
- C++ implementation for BLOOM ☆810 · Updated last year
- fastLLaMa: An experimental high-performance framework for running Decoder-only LLMs with 4-bit quantization in Python using a C/C++ backe… ☆408 · Updated last year
- Landmark Attention: Random-Access Infinite Context Length for Transformers QLoRA ☆123 · Updated last year
- ☆457 · Updated last year
- A torchless, c++ rwkv implementation using 8bit quantization, written in cuda/hip/vulkan for maximum compatibility and minimum dependenci… ☆309 · Updated last year
- Falcon LLM ggml framework with CPU and GPU support ☆246 · Updated last year
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆571 · Updated 8 months ago
- ☆536 · Updated last year
- Automated prompting and scoring framework to evaluate LLMs using updated human knowledge prompts ☆111 · Updated last year
- ggml implementation of BERT ☆483 · Updated last year
- ☆407 · Updated last year
- Framework agnostic python runtime for RWKV models ☆145 · Updated last year
- Command-line script for inferencing from models such as MPT-7B-Chat ☆101 · Updated last year
- LLaMa retrieval plugin script using OpenAI's retrieval plugin ☆324 · Updated last year
- LLM-based code completion engine ☆181 · Updated last month
- Tune any FALCON in 4-bit ☆466 · Updated last year
- Extend the original llama.cpp repo to support the RedPajama model ☆117 · Updated 6 months ago
- Python bindings for ggml ☆140 · Updated 6 months ago
- WebGPU LLM inference tuned by hand ☆149 · Updated last year
- C++ implementation for 💫StarCoder ☆452 · Updated last year
- Python bindings for llama.cpp (see the usage sketch after this list) ☆199 · Updated last year
- Full finetuning of large language models without large memory requirements ☆93 · Updated last year
- ☆412 · Updated last year
- CLIP inference in plain C/C++ with no extra dependencies ☆485 · Updated 6 months ago
- Landmark Attention: Random-Access Infinite Context Length for Transformers ☆422 · Updated last year
- Train LLaMA with LoRA on a single 4090 and merge the LoRA weights into the base model, Stanford Alpaca-style (see the merge sketch after this list) ☆50 · Updated last year
- Simple, hackable and fast implementation for training/finetuning medium-sized LLaMA-based models ☆165 · Updated last week
- Embeddings focused small version of Llama NLP model ☆103 · Updated last year
- batched loras ☆338 · Updated last year
- Code for paper: "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆362 · Updated last year
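For the llama.cpp Python bindings listed above, a minimal sketch in the style of the widely used llama-cpp-python package; the listed repository's actual interface may differ, and the model path is a placeholder for a locally quantized ggml file:

```python
# Sketch in the style of llama-cpp-python; the bindings listed above
# may expose a different API. The model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/7B/ggml-model-q4_0.bin")
result = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(result["choices"][0]["text"])
```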
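And for the LoRA-merge entry, a hedged sketch of folding adapter weights into a base model with Hugging Face peft; the base-model id and paths are placeholders, and the listed project may implement merging differently:

```python
# Hedged sketch of merging LoRA weights with Hugging Face peft;
# the base-model id and adapter path are placeholders.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
model = PeftModel.from_pretrained(base, "./lora-alpaca")  # attach the adapter
merged = model.merge_and_unload()                         # fold LoRA into the base weights
merged.save_pretrained("./alpaca-merged")
```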