intel / intel-xai-tools
Explainable AI Tooling (XAI). XAI is used to discover and explain a model's predictions in a way that is interpretable to the user, exposing relevant information in the dataset, the feature set, and the model's algorithms.
☆39 · Updated last month
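Attribution methods of the kind intel-xai-tools wraps can be illustrated with a library-agnostic sketch. The example below implements permutation feature importance in plain NumPy: shuffle one feature column at a time and measure how much the model's accuracy drops. The function and model here are illustrative assumptions, not the toolkit's actual API.

```python
import numpy as np

def permutation_importance(model_fn, X, y, n_repeats=10, seed=0):
    """Estimate each feature's importance as the mean accuracy drop
    observed when that feature's column is randomly shuffled."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(model_fn(X) == y)  # accuracy on unmodified data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j's signal
            drops.append(baseline - np.mean(model_fn(Xp) == y))
        importances[j] = np.mean(drops)
    return importances

# Toy setup: the label depends only on feature 0; feature 1 is pure noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)
model_fn = lambda X: (X[:, 0] > 0).astype(int)  # hypothetical "model"

imp = permutation_importance(model_fn, X, y)
# Feature 0 carries all the signal, so its importance dominates;
# shuffling the noise feature leaves predictions unchanged.
```

Production toolkits add gradient- and perturbation-based attributions on top of ideas like this, but the shuffle-and-remeasure loop is the core of the model-agnostic approach.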
Alternatives and similar repositories for intel-xai-tools
Users interested in intel-xai-tools are comparing it to the libraries listed below.
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU(XPU) device. Note… ☆61 · Updated this week
- Large Language Model Text Generation Inference on Habana Gaudi ☆33 · Updated 3 months ago
- oneCCL Bindings for Pytorch* ☆97 · Updated 2 months ago
- ☆47 · Updated last month
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆166 · Updated last month
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆13 · Updated last month
- Setup and Installation Instructions for Habana binaries, docker image creation ☆25 · Updated last month
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆188 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆474 · Updated this week
- Home for OctoML PyTorch Profiler ☆113 · Updated 2 years ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆305 · Updated last month
- Libraries and tools to support Transfer Learning ☆19 · Updated 2 months ago
- Issues related to MLPerf™ training policies, including rules and suggested changes ☆95 · Updated last week
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆129 · Updated 2 months ago
- Run Generative AI models with simple C++/Python API and using OpenVINO Runtime ☆298 · Updated this week
- ☆48 · Updated this week
- Inference Model Manager for Kubernetes ☆46 · Updated 6 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆76 · Updated this week
- Issues related to MLPerf™ Inference policies, including rules and suggested changes ☆62 · Updated last week
- Intel® Tensor Processing Primitives extension for Pytorch* ☆17 · Updated this week
- Easily benchmark PyTorch model FLOPs, latency, throughput, allocated gpu memory and energy consumption ☆103 · Updated last year
- Intel® AI Reference Models: contains Intel optimizations for running deep learning workloads on Intel® Xeon® Scalable processors and Inte… ☆716 · Updated last week
- Intel® End-to-End AI Optimization Kit ☆32 · Updated 11 months ago
- Benchmarks to capture important workloads. ☆31 · Updated 5 months ago
- MLPerf™ logging library ☆36 · Updated 2 months ago
- Tutorials for running models on First-gen Gaudi and Gaudi2 for Training and Inference. The source files for the tutorials on https://dev… ☆61 · Updated last week
- An Awesome list of oneAPI projects ☆146 · Updated 6 months ago
- This repository contains Dockerfiles, scripts, yaml files, Helm charts, etc. used to scale out AI containers with versions of TensorFlow … ☆47 · Updated this week
- Computation using data flow graphs for scalable machine learning ☆67 · Updated this week
- ☆39 · Updated this week