intel / intel-xai-tools
Explainable AI Tooling (XAI). XAI is used to discover and explain a model's predictions in a way that is interpretable to the user. Relevant information in the dataset, feature set, and model's algorithms is exposed.
☆37 · Updated last week
Alternatives and similar repositories for intel-xai-tools:
Users interested in intel-xai-tools are comparing it to the libraries listed below.
- Tutorials for running models on first-gen Gaudi and Gaudi2 for training and inference. The source files for the tutorials on https://dev… ☆59 · Updated this week
- oneCCL Bindings for PyTorch* ☆91 · Updated this week
- Computation using data flow graphs for scalable machine learning ☆67 · Updated this week
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆162 · Updated last month
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆62 · Updated 3 weeks ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆454 · Updated this week
- Intel® Tensor Processing Primitives extension for PyTorch* ☆12 · Updated last week
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆704 · Updated this week
- Run generative AI models with a simple C++/Python API using OpenVINO Runtime ☆249 · Updated this week
- Issues related to MLPerf™ training policies, including rules and suggested changes ☆94 · Updated 3 weeks ago
- Issues related to MLPerf™ Inference policies, including rules and suggested changes ☆60 · Updated last month
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆62 · Updated this week
- 🔮 Execution time predictions for deep neural network training iterations across different GPUs ☆60 · Updated 2 years ago
- Large Language Model Text Generation Inference on Habana Gaudi ☆32 · Updated last week
- Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processors (HPU) ☆181 · Updated this week
- ☆16 · Updated this week
- Intel® Extension for TensorFlow* ☆336 · Updated 2 weeks ago
- Intel® End-to-End AI Optimization Kit ☆31 · Updated 8 months ago
- Inference Model Manager for Kubernetes ☆46 · Updated 5 years ago
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python ☆329 · Updated this week
- OpenAI Triton backend for Intel® GPUs ☆172 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective ☆13 · Updated last month
- oneAPI Collective Communications Library (oneCCL) ☆227 · Updated this week
- Repository for OpenVINO's extra modules ☆118 · Updated this week
- This repository contains the results and code for the MLPerf™ Training v1.1 benchmark ☆23 · Updated last year
- Triton CLI is an open-source command line interface that enables users to create, deploy, and profile models served by the Triton Inferen… ☆61 · Updated last week
- OpenVINO™ integration with TensorFlow ☆179 · Updated 9 months ago
- Pretrain, fine-tune, and serve LLMs on Intel platforms with Ray ☆123 · Updated last week
- This repository contains the results and code for the MLPerf™ Training v1.0 benchmark ☆38 · Updated last year
- PArametrized Recommendation and AI Model benchmark is a repository for development of numerous uBenchmarks as well as end-to-end nets for… ☆132 · Updated this week