HabanaAI / Model-References
Reference models for Intel(R) Gaudi(R) AI Accelerator
☆161 · Updated 2 weeks ago
Alternatives and similar repositories for Model-References
Users interested in Model-References are comparing it to the libraries listed below.
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆186 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆75 · Updated this week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… ☆61 · Updated 2 months ago
- oneCCL Bindings for Pytorch* ☆97 · Updated last month
- Issues related to MLPerf™ training policies, including rules and suggested changes ☆95 · Updated last month
- This repository contains the results and code for the MLPerf™ Training v1.0 benchmark. ☆38 · Updated last year
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆196 · Updated this week
- Distributed preprocessing and data loading for language datasets ☆39 · Updated last year
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆157 · Updated 5 months ago
- MLPerf™ logging library ☆36 · Updated last month
- ☆118 · Updated last year
- Large Language Model Text Generation Inference on Habana Gaudi ☆33 · Updated 2 months ago
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the… ☆169 · Updated last week
- Research and development for optimizing transformers ☆126 · Updated 4 years ago
- A library to analyze PyTorch traces. ☆379 · Updated last week
- PArametrized Recommendation and Ai Model benchmark is a repository for development of numerous uBenchmarks as well as end to end nets for… ☆140 · Updated this week
- This repository contains the results and code for the MLPerf™ Training v2.0 benchmark. ☆28 · Updated last year
- ☆71 · Updated 2 months ago
- FTPipe and related pipeline model parallelism research. ☆41 · Updated 2 years ago
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆13 · Updated 2 weeks ago
- This is a plugin which lets EC2 developers use libfabric as a network provider while running NCCL applications. ☆173 · Updated this week
- Applied AI experiments and examples for PyTorch ☆271 · Updated this week
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆208 · Updated 9 months ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆115 · Updated 6 months ago
- ☆18 · Updated this week
- ☆250 · Updated 10 months ago
- Home for OctoML PyTorch Profiler ☆113 · Updated 2 years ago
- This repository contains the experimental PyTorch native float8 training UX ☆223 · Updated 10 months ago
- ☆53 · Updated 8 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆211 · Updated last year