HabanaAI / Model-References
Reference models for Intel® Gaudi® AI Accelerator
☆159 · Updated last week
Alternatives and similar repositories for Model-References:
Users interested in Model-References are comparing it to the libraries listed below.
- Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processors (HPU); see the usage sketch after this list☆166 · Updated this week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note…☆58 · Updated last month
- oneCCL Bindings for PyTorch*; see the initialization sketch after this list☆87 · Updated 3 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs☆50 · Updated this week
- Large Language Model Text Generation Inference on Habana Gaudi☆31 · Updated this week
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components.☆186 · Updated last week
- Issues related to MLPerf™ training policies, including rules and suggested changes☆94 · Updated 2 months ago
- A Python library that transfers PyTorch tensors between CPU and NVMe☆102 · Updated 2 months ago
- The Triton backend for PyTorch TorchScript models.☆141 · Updated last week
- Tutorials for running models on first-gen Gaudi and Gaudi2 for training and inference. The source files for the tutorials on https://dev…☆56 · Updated this week
- Training material for IPU users: tutorials, feature examples, simple applications☆86 · Updated last year
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.☆12 · Updated last month
- This repository contains the results and code for the MLPerf™ Training v1.0 benchmark.☆37 · Updated 11 months ago
- OpenAI Triton backend for Intel® GPUs☆157 · Updated this week
- MLPerf™ logging library☆32 · Updated 3 weeks ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity☆195 · Updated last year
- The Triton backend for the ONNX Runtime.☆136 · Updated this week
- Applied AI experiments and examples for PyTorch☆216 · Updated last week
- Benchmarks to capture important workloads.☆29 · Updated this week
- PArametrized Recommendation and AI Model benchmark is a repository for the development of numerous uBenchmarks as well as end-to-end nets for…☆128 · Updated last week
- Research and development for optimizing transformers☆125 · Updated 3 years ago
- FTPipe and related pipeline model parallelism research.☆41 · Updated last year
- This is a plugin that lets EC2 developers use libfabric as a network provider while running NCCL applications.☆160 · Updated this week
- ROCm Communication Collectives Library (RCCL)☆291 · Updated this week
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind…☆153 · Updated last month
- Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers☆204 · Updated 5 months ago
- MSCCL++: A GPU-driven communication stack for scalable AI applications☆292 · Updated this week
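To make the optimum-habana entry above concrete, here is a minimal sketch of fine-tuning a 🤗 Transformers model on Gaudi (HPU) with its GaudiTrainer. The model name, dataset contents, and Gaudi config name are illustrative assumptions, and the snippet presumes optimum-habana, transformers, and datasets are installed on a machine with an HPU:

```python
# A minimal sketch, not from the repository itself: fine-tuning a small model
# on Gaudi (HPU) with optimum-habana. Model, dataset, and config names are
# illustrative assumptions.
from datasets import Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Tiny synthetic dataset so the example is self-contained.
train_dataset = Dataset.from_dict(
    {"text": ["great product", "terrible service"], "label": [1, 0]}
).map(lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length", max_length=16))

args = GaudiTrainingArguments(
    output_dir="./out",
    use_habana=True,                               # run on HPU
    use_lazy_mode=True,                            # Gaudi lazy-execution mode
    gaudi_config_name="Habana/bert-base-uncased",  # published Gaudi config on the Hub
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

trainer = GaudiTrainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer)
trainer.train()
```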
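Similarly, the oneCCL Bindings for PyTorch* entry boils down to registering a "ccl" backend with torch.distributed. A minimal single-rank sketch, assuming the oneccl_bindings_for_pytorch package is installed (the master address/port values are placeholders):

```python
import os
import torch
import torch.distributed as dist
import oneccl_bindings_for_pytorch  # noqa: F401 - importing registers the "ccl" backend

# Single-process setup for illustration; a real launcher would set these per rank.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
os.environ.setdefault("RANK", "0")
os.environ.setdefault("WORLD_SIZE", "1")

dist.init_process_group(backend="ccl")
t = torch.ones(4)
dist.all_reduce(t)  # element-wise sum across all ranks via oneCCL
print(t)
dist.destroy_process_group()
```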