openvinotoolkit / openvino_tokenizers
OpenVINO Tokenizers extension
☆48 · Updated this week

Alternatives and similar repositories for openvino_tokenizers
Users interested in openvino_tokenizers are comparing it to the libraries listed below.
- Run Generative AI models with a simple C++/Python API using OpenVINO Runtime ☆428 · Updated this week
- This repository contains Dockerfiles, scripts, YAML files, Helm charts, etc. used to scale out AI containers with versions of TensorFlow … ☆60 · Updated 3 weeks ago
- Repository for OpenVINO's extra modules ☆163 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆532 · Updated this week
- A curated list of OpenVINO-based AI projects ☆181 · Updated 7 months ago
- Developer-kit reference setup scripts for various kinds of Intel platforms and GPUs ☆42 · Updated this week
- ONNX Runtime: cross-platform, high-performance scoring engine for ML models ☆78 · Updated this week
- With OpenVINO Test Drive, users can run large language models (LLMs) and models trained by Intel Geti on their devices, including AI PCs … ☆37 · Updated last month
- No-code CLI designed for accelerating ONNX workflows ☆227 · Updated 7 months ago
- The framework to generate a Dockerfile, then build, test, and deploy a Docker image with the OpenVINO™ toolkit ☆71 · Updated 2 weeks ago
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime ☆441 · Updated this week
- Large Language Model Text Generation Inference on Habana Gaudi ☆34 · Updated 10 months ago
- Use safetensors with ONNX 🤗 ☆87 · Updated this week
- OpenVINO Intel NPU Compiler ☆81 · Updated last week
- Intel® AI Super Builder ☆159 · Updated this week
- Explore our open source AI portfolio! Develop, train, and deploy your AI solutions with performance- and productivity-optimized tools fro… ☆67 · Updated 10 months ago
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆420 · Updated last week
- AMD-related optimizations for transformer models ☆97 · Updated 3 months ago
- Tools for easier OpenVINO development and debugging ☆10 · Updated 6 months ago
- Pre-built components and code samples to help you build and deploy production-grade AI applications with the OpenVINO™ Toolkit from Intel ☆202 · Updated this week
- cudnn_frontend provides a C++ wrapper for the cuDNN backend API and samples showing how to use it ☆681 · Updated last week
- Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processors (HPU) ☆205 · Updated last week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆64 · Updated 7 months ago
- A scalable inference server for models optimized with OpenVINO™ ☆823 · Updated this week
- oneAPI Specification source files ☆211 · Updated 3 weeks ago
- Common utilities for ONNX converters ☆294 · Updated last month
- An innovative library for efficient LLM inference via low-bit quantization ☆352 · Updated last year
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA’s TensorRT-LLM for GPU a… ☆42 · Updated last year
- Generative AI extensions for onnxruntime ☆953 · Updated this week