graphcore / tutorials
Training material for IPU users: tutorials, feature examples, simple applications
☆86 · Updated last year
Alternatives and similar repositories for tutorials:
Users interested in tutorials are comparing it to the libraries listed below:
- PyTorch interface for the IPU ☆177 · Updated last year
- Example code and applications for machine learning on Graphcore IPUs ☆319 · Updated 10 months ago
- Research and development for optimizing transformers ☆125 · Updated 3 years ago
- TensorFlow for the IPU ☆78 · Updated last year
- Poplar libraries ☆116 · Updated last year
- Poplar Advanced Runtime for the IPU ☆6 · Updated last year
- Fast sparse deep learning on CPUs ☆51 · Updated 2 years ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆102 · Updated last month
- ☆96 · Updated 4 months ago
- Graph algorithms for machine learning frameworks ☆27 · Updated last year
- ☆57 · Updated 7 months ago
- Blazing fast training of 🤗 Transformers on Graphcore IPUs ☆84 · Updated 10 months ago
- ☆114 · Updated 10 months ago
- Memory Optimizations for Deep Learning (ICML 2023) ☆62 · Updated 10 months ago
- ☆157 · Updated last year
- FTPipe and related pipeline model parallelism research ☆41 · Updated last year
- ☆26 · Updated 3 years ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5) ☆230 · Updated 2 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components ☆182 · Updated this week
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆62 · Updated 6 years ago
- Distributed preprocessing and data loading for language datasets ☆39 · Updated 9 months ago
- System for automated integration of deep learning backends ☆48 · Updated 2 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆130 · Updated 2 years ago
- Extensible collectives library in Triton ☆76 · Updated 3 months ago
- Easy and lightning fast training of 🤗 Transformers on the Habana Gaudi processor (HPU) ☆165 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆47 · Updated this week
- ☆64 · Updated 2 months ago
- High-speed GEMV kernels with up to 2.7x speedup over the PyTorch baseline ☆93 · Updated 6 months ago
- Sparsity support for PyTorch ☆33 · Updated last month