openvinotoolkit / openvino_tokenizers
OpenVINO Tokenizers extension
☆44 · Updated last week
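As context for the comparisons below, here is a minimal usage sketch of the extension itself, based on its published Python API (`convert_tokenizer`). The Hugging Face model id is only a placeholder, and the output tensor names depend on the tokenizer being converted, so treat this as an illustrative sketch rather than the project's canonical example.

```python
# Illustrative sketch: convert a Hugging Face tokenizer into an OpenVINO model
# and run it with OpenVINO Runtime. The model id below is only a placeholder.
from transformers import AutoTokenizer              # pip install transformers
from openvino import compile_model                  # pip install openvino
from openvino_tokenizers import convert_tokenizer   # pip install openvino-tokenizers

hf_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder model id
ov_tokenizer = convert_tokenizer(hf_tokenizer)    # returns an openvino.Model
compiled_tokenizer = compile_model(ov_tokenizer)  # compile for the default device

# The compiled model takes a batch of strings and returns tokenized tensors;
# exact output names (input_ids, attention_mask, ...) depend on the source tokenizer.
outputs = compiled_tokenizer(["Hello, OpenVINO Tokenizers!"])
print(outputs["input_ids"])
```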
Alternatives and similar repositories for openvino_tokenizers
Users interested in openvino_tokenizers are comparing it to the libraries listed below
- Run generative AI models with a simple C++/Python API on top of OpenVINO Runtime (a Python usage sketch follows this list) ☆403 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools (a usage sketch follows this list) ☆522 · Updated this week
- This repository contains Dockerfiles, scripts, YAML files, Helm charts, etc. used to scale out AI containers with versions of TensorFlow … ☆56 · Updated last week
- A curated list of OpenVINO-based AI projects ☆176 · Updated 5 months ago
- Repository for OpenVINO's extra modules ☆155 · Updated 2 weeks ago
- No-code CLI designed for accelerating ONNX workflows ☆222 · Updated 6 months ago
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆414 · Updated this week
- Use safetensors with ONNX 🤗 ☆78 · Updated 2 months ago
- onnxruntime-extensions: A specialized pre- and post-processing library for ONNX Runtime ☆431 · Updated last week
- ONNX Runtime: cross-platform, high-performance scoring engine for ML models ☆75 · Updated last week
- With OpenVINO Test Drive, users can run large language models (LLMs) and models trained by Intel Geti on their devices, including AI PCs … ☆34 · Updated last week
- Easy and lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU) ☆203 · Updated this week
- The framework to generate a Dockerfile, build, test, and deploy a Docker image with the OpenVINO™ toolkit. ☆68 · Updated 3 weeks ago
- Large Language Model Text Generation Inference on Habana Gaudi ☆34 · Updated 9 months ago
- Intel® AI Assistant Builder ☆137 · Updated last week
- ☆28 · Updated 3 weeks ago
- Explore our open source AI portfolio! Develop, train, and deploy your AI solutions with performance- and productivity-optimized tools fro… ☆61 · Updated 9 months ago
- cudnn_frontend provides a C++ wrapper for the cuDNN backend API and samples showing how to use it ☆663 · Updated last week
- Developer kits reference setup scripts for various kinds of Intel platforms and GPUs ☆39 · Updated last week
- An innovative library for efficient LLM inference via low-bit quantization ☆351 · Updated last year
- Common utilities for ONNX converters ☆289 · Updated last week
- AMD-related optimizations for transformer models ☆96 · Updated 2 months ago
- A toolkit to help optimize ONNX models ☆280 · Updated last week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆63 · Updated 5 months ago
- OpenVINO Intel NPU Compiler ☆75 · Updated last week
- ☆131 · Updated last week
- ☆67 · Updated last week
- A scalable inference server for models optimized with OpenVINO™ ☆805 · Updated this week
- The Triton backend for the ONNX Runtime. ☆170 · Updated 2 weeks ago
- ☆276 · Updated this week
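For the first entry in the list above (the OpenVINO generative AI C++/Python API), here is a minimal Python sketch of how a text-generation model is typically run. The model directory and device string are placeholders, and generation parameters can vary between releases, so this is an assumption-laden illustration rather than the library's official example.

```python
# Illustrative sketch only: running a text-generation model with the OpenVINO GenAI
# Python API. The model directory is a placeholder and must already contain a model
# exported to OpenVINO IR format.
import openvino_genai as ov_genai  # pip install openvino-genai

pipe = ov_genai.LLMPipeline("path/to/exported/model", "CPU")  # placeholder path, CPU device
print(pipe.generate("What is OpenVINO?", max_new_tokens=100))
```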
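For the 🤗 Optimum Intel entry above, a hedged sketch of accelerating a Transformers model through its OpenVINO backend; the model id is a placeholder chosen only for illustration, and the `export=True` conversion step assumes the model has not already been saved in OpenVINO IR format.

```python
# Illustrative sketch only: exporting and running a 🤗 Transformers model through
# Optimum Intel's OpenVINO integration. The model id is a placeholder example.
from optimum.intel import OVModelForCausalLM  # pip install optimum[openvino]
from transformers import AutoTokenizer

model_id = "gpt2"  # placeholder model id
model = OVModelForCausalLM.from_pretrained(model_id, export=True)  # convert to OpenVINO IR on the fly
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("OpenVINO is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```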