marella / ctransformers
Python bindings for Transformer models implemented in C/C++ using the GGML library.
☆1,857 · Updated last year
Alternatives and similar repositories for ctransformers:
Users interested in ctransformers are comparing it to the libraries listed below.
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,858 · Updated last year
- A fast inference library for running LLMs locally on modern consumer-class GPUs. ☆4,115 · Updated this week
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,808 · Updated this week
- 4-bit quantization of LLaMA using GPTQ. ☆3,050 · Updated 9 months ago
- Customizable implementation of the self-instruct paper. ☆1,043 · Updated last year
- Large language models (LLMs) made easy; EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆2,469 · Updated 8 months ago
- Alpaca dataset from Stanford, cleaned and curated. ☆1,546 · Updated 2 years ago
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆2,092 · Updated last week
- Tune any FALCON in 4-bit. ☆466 · Updated last year
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,077 · Updated last year
- Finetuning Large Language Models on One Consumer GPU in 2 Bits. ☆720 · Updated 10 months ago
- Large-scale LLM inference engine. ☆1,379 · Updated last week
- Tools for merging pretrained large language models. ☆5,556 · Updated last week
- Chat language model that can use tools and interpret the results. ☆1,538 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆6,918 · Updated last week
- Simple UI for LLM Model Finetuning. ☆2,062 · Updated last year
- [ACL 2023] One Embedder, Any Task: Instruction-Finetuned Text Embeddings. ☆1,931 · Updated 3 months ago
- Go ahead and axolotl questions. ☆9,075 · Updated this week
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,002 · Updated 3 weeks ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters. ☆1,816 · Updated last year
- Enforce the output format (JSON Schema, regex, etc.) of a language model. ☆1,770 · Updated last month
- ⚡ Build your chatbot within minutes on your favorite device; offers SOTA compression techniques for LLMs; runs LLMs efficiently on Intel Pl… ☆2,168 · Updated 6 months ago
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs. ☆2,942 · Updated last month
- RayLLM: LLMs on Ray (archived). See the README for more info. ☆1,263 · Updated last month
- YaRN: Efficient Context Window Extension of Large Language Models. ☆1,463 · Updated last year
- Aligning pretrained language models with instruction data generated by themselves. ☆4,340 · Updated 2 years ago
- A blazing-fast inference solution for text embedding models. ☆3,414 · Updated last week
- Fine-tune Mistral-7B on 3090s, A100s, H100s. ☆709 · Updated last year
- The hub for EleutherAI's work on interpretability and learning dynamics. ☆2,456 · Updated last month