intel / ai
Explore our open source AI portfolio! Develop, train, and deploy your AI solutions with performance- and productivity-optimized tools from Intel.
☆34 · Updated 8 months ago
Alternatives and similar repositories for ai:
Users interested in ai are comparing it to the libraries listed below.
- OpenVINO Tokenizers extension ☆30 · Updated this week
- This repository contains Dockerfiles, scripts, yaml files, Helm charts, etc. used to scale out AI containers with versions of TensorFlow … ☆35 · Updated last week
- A curated list of OpenVINO based AI projects ☆122 · Updated 2 months ago
- A repository of Dockerfiles, scripts, yaml files, Helm charts, etc. used to build and scale the sample AI workflows with python, kubernet… ☆11 · Updated last year
- Official repository of the Intel Certified Developer Program ☆43 · Updated this week
- ☆12 · Updated 5 months ago
- Developer kits reference setup scripts for various kinds of Intel platforms and GPUs ☆18 · Updated this week
- Run Generative AI models with simple C++/Python API and using OpenVINO Runtime ☆224 · Updated this week
- llama.cpp fork used by GPT4All ☆52 · Updated last week
- Knowledge Base QA using RAG pipeline on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max) with IPEX-LL… ☆16 · Updated 2 weeks ago
- Very basic framework for composable parameterized large language model (Q)LoRA / (Q)DoRA fine-tuning using mlx, mlx_lm, and OgbujiPT. ☆37 · Updated last week
- Pre-built components and code samples to help you build and deploy production-grade AI applications with the OpenVINO™ Toolkit from Intel ☆128 · Updated this week
- Fork of llama.cpp, extended for GPT-NeoX, RWKV-v4, and Falcon models ☆29 · Updated last year
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆173 · Updated this week
- ☆18 · Updated last month
- First token cutoff sampling inference example ☆29 · Updated last year
- Tensor library for machine learning ☆21 · Updated last year
- Docs, Snippets, Guides ☆34 · Updated this week
- Horizon chart for CPU/GPU/Neural Engine utilization monitoring on Apple M1/M2 and NVIDIA GPUs on Linux ☆25 · Updated 4 months ago
- AI Edge Quantizer: flexible post-training quantization for LiteRT models. ☆24 · Updated this week
- ☆105 · Updated 3 months ago
- Visualize expert firing frequencies across sentences in the Mixtral MoE model ☆17 · Updated last year
- Large Language Model Text Generation Inference on Habana Gaudi ☆33 · Updated this week
- Intel® Extension for TensorFlow* ☆332 · Updated last month
- A collection of notebooks for the Hugging Face blog series (https://huggingface.co/blog). ☆43 · Updated 6 months ago
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA’s TensorRT-LLM for GPU a… ☆42 · Updated 5 months ago
- ☆34 · Updated this week
- Data Wrangling, Linear Models & other misc. Inferential Statistics. ☆14 · Updated 2 years ago
- LLM SDK for OnnxRuntime GenAI (OGA) ☆90 · Updated this week