microsoft / Llama-2-Onnx
☆1,028 · Updated last year
Alternatives and similar repositories for Llama-2-Onnx
Users interested in Llama-2-Onnx are comparing it to the libraries listed below.
- C++ implementation for BLOOM ☆808 · Updated 2 years ago
- Python bindings for the Transformer models implemented in C/C++ using the GGML library. ☆1,878 · Updated last year
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,167 · Updated last year
- ggml implementation of BERT ☆500 · Updated last year
- Finetuning Large Language Models on One Consumer GPU in 2 Bits ☆733 · Updated last year
- Serving multiple LoRA finetuned LLMs as one ☆1,122 · Updated last year
- INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model ☆1,555 · Updated 8 months ago
- ☆550 · Updated 11 months ago
- Scale LLM Engine public repository ☆814 · Updated last week
- Salesforce open-source LLMs with 8k sequence length. ☆722 · Updated 10 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆733 · Updated last year
- Run inference on MPT-30B using CPU ☆576 · Updated 2 years ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,903 · Updated 2 years ago
- Llama 2 Everywhere (L2E) ☆1,521 · Updated 3 months ago
- LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions ☆823 · Updated 2 years ago
- Finetune llama2-70b and codellama on a MacBook Air without quantization ☆450 · Updated last year
- C++ implementation for 💫StarCoder ☆457 · Updated 2 years ago
- Complete training code for the open-source, high-performance Llama model, covering the full process from pre-training to RLHF. ☆52 · Updated 2 years ago
- MiniLLM is a minimal system for running modern LLMs on consumer-grade GPUs ☆936 · Updated 2 years ago
- Inference Llama 2 in one file of pure 🔥 ☆2,118 · Updated 2 weeks ago
- Quantized inference code for LLaMA models ☆1,047 · Updated 2 years ago
- Accelerate your Hugging Face Transformers by 7.6-9x. Native to Hugging Face and PyTorch. ☆686 · Updated last year
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆710 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,080 · Updated 5 months ago
- llama3.np is a pure NumPy implementation of the Llama 3 model. ☆992 · Updated 7 months ago
- 4-bit quantization of LLaMA using GPTQ ☆3,081 · Updated last year
- Port of MiniGPT4 to C++ (4-bit, 5-bit, 6-bit, 8-bit, 16-bit CPU inference with GGML) ☆568 · Updated 2 years ago
- TinyChatEngine: On-Device LLM Inference Library ☆929 · Updated last year
- Generative AI extensions for onnxruntime ☆901 · Updated this week
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,226 · Updated last year