microsoft / Llama-2-Onnx
☆1,023 · Updated 10 months ago
Related projects
Alternatives and complementary repositories for Llama-2-Onnx
- Serving multiple LoRA-finetuned LLMs as one ☆986 · Updated 6 months ago
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers" (see the quantization sketch after this list) ☆1,945 · Updated 7 months ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,758 · Updated 10 months ago
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference ☆1,765 · Updated this week
- Extend existing LLMs well beyond their original training length with constant memory usage, without retraining ☆677 · Updated 7 months ago
- ggml implementation of BERT ☆466 · Updated 8 months ago
- ☆505 · Updated 3 weeks ago
- ☆527 · Updated this week
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed ☆1,904 · Updated this week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆2,534 · Updated last month
- Python bindings for the Transformer models implemented in C/C++ using the GGML library ☆1,815 · Updated 9 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆702 · Updated this week
- Llama 2 Everywhere (L2E) ☆1,512 · Updated last month
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,150 · Updated last month
- ☆860 · Updated 11 months ago
- Code for the UltraFastBERT paper ☆514 · Updated 7 months ago
- C++ implementation for BLOOM ☆811 · Updated last year
- MiniLLM is a minimal system for running modern LLMs on consumer-grade GPUs ☆868 · Updated last year
- Scale LLM Engine public repository ☆783 · Updated this week
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,138 · Updated last month
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆708 · Updated 5 months ago
- C++ implementation for 💫StarCoder ☆447 · Updated last year
- An innovative library for efficient LLM inference via low-bit quantization ☆348 · Updated 2 months ago
- Port of MiniGPT4 in C++ (4-bit, 5-bit, 6-bit, 8-bit, 16-bit CPU inference with GGML) ☆558 · Updated last year
- INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model ☆1,425 · Updated 3 months ago
- Quantized inference code for LLaMA models ☆1,051 · Updated last year
- 4-bit quantization of LLaMA using GPTQ ☆2,999 · Updated 4 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights ☆2,760 · Updated last year
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads (see the draft-and-verify decoding sketch after this list) ☆2,314 · Updated 4 months ago
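
Several entries above (GPTQ, AutoAWQ, AWQ, HQQ, the GPTQ-based LLaMA repos) are weight-only quantization methods: weights are stored at roughly 4 bits with one scale per small group and dequantized on the fly. The sketch below is a minimal NumPy illustration of only that shared storage idea, using plain round-to-nearest quantization. It is not GPTQ or AWQ, both of which choose quantization parameters far more carefully using calibration data, and all names in it are illustrative.

```python
import numpy as np

def quantize_4bit_groupwise(w: np.ndarray, group_size: int = 128):
    """Round-to-nearest 4-bit weight quantization with one scale per group.

    A toy stand-in for the weight-only schemes listed above (GPTQ/AWQ pick
    scales and rounding using calibration data; this sketch does not).
    """
    groups = w.reshape(-1, group_size)                        # (n_groups, group_size)
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7.0   # map max magnitude to +/-7
    scale = np.where(scale == 0, 1.0, scale)                  # avoid div-by-zero on all-zero groups
    q = np.clip(np.round(groups / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray, shape) -> np.ndarray:
    """Recover approximate float weights from int4-range codes and scales."""
    return (q.astype(np.float32) * scale).reshape(shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4096, 4096)).astype(np.float32)
    q, s = quantize_4bit_groupwise(w)
    w_hat = dequantize(q, s, w.shape)
    print(f"mean absolute error after 4-bit round-trip: {np.abs(w - w_hat).mean():.5f}")
```

In the real libraries the int4 codes are additionally packed two per byte and dequantized inside fused matmul kernels; that packing and fusion, not the rounding itself, is where speedups like the 2x figure quoted for AutoAWQ come from.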
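
The Lookahead Decoding and Medusa entries attack the same bottleneck: plain autoregressive decoding emits one token per expensive forward pass. The toy sketch below shows the generic draft-and-verify idea behind this family of methods. It is neither algorithm (Lookahead Decoding generates and verifies n-grams via Jacobi-style iteration, and Medusa adds extra decoding heads to the model itself), and the `target_next`/`draft_next` callables are hypothetical stand-ins for real networks.

```python
from typing import Callable, List

def speculative_decode(
    target_next: Callable[[List[int]], int],  # expensive model: next token given prefix
    draft_next: Callable[[List[int]], int],   # cheap model: guesses the same thing
    prompt: List[int],
    max_new_tokens: int = 16,
    k: int = 4,                               # draft tokens proposed per verification step
) -> List[int]:
    """Greedy draft-and-verify decoding (a sketch, not Lookahead or Medusa)."""
    tokens = list(prompt)
    while len(tokens) < len(prompt) + max_new_tokens:
        # 1. Draft: propose k tokens autoregressively with the cheap model.
        draft, ctx = [], list(tokens)
        for _ in range(k):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        # 2. Verify: keep the longest draft prefix the target agrees with, plus
        #    one corrected token at the first mismatch. Done sequentially here
        #    for clarity; in practice it is a single batched forward pass.
        ctx = list(tokens)
        for t in draft:
            correct = target_next(ctx)
            ctx.append(correct)
            if correct != t:
                break
        tokens = ctx
    return tokens[: len(prompt) + max_new_tokens]

if __name__ == "__main__":
    # Toy deterministic "models": the target repeats a fixed cycle; the draft
    # agrees with it most of the time and is wrong at every fifth position.
    cycle = [3, 1, 4, 1, 5, 9, 2, 6]
    target = lambda ctx: cycle[len(ctx) % len(cycle)]
    draft = lambda ctx: cycle[len(ctx) % len(cycle)] if len(ctx) % 5 else 0
    print(speculative_decode(target, draft, prompt=[3, 1], max_new_tokens=12))
```

Because every kept token is one the target model would have chosen greedily anyway, the output is identical to ordinary greedy decoding; the win in a real system is that all k draft positions are verified in one batched forward pass instead of k sequential ones.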