graphcore / tutorials
Training material for IPU users: tutorials, feature examples, simple applications
☆86 · Updated last year
Alternatives and similar repositories for tutorials:
Users interested in tutorials are comparing it to the libraries listed below.
- PyTorch interface for the IPU ☆177 · Updated last year
- Example code and applications for machine learning on Graphcore IPUs ☆320 · Updated 11 months ago
- Poplar Advanced Runtime for the IPU ☆6 · Updated last year
- Poplar libraries ☆117 · Updated last year
- TensorFlow for the IPU ☆78 · Updated last year
- Research and development for optimizing transformers ☆125 · Updated 4 years ago
- ☆59 · Updated 2 weeks ago
- Fast sparse deep learning on CPUs ☆52 · Updated 2 years ago
- Graph algorithms for machine learning frameworks ☆27 · Updated last year
- Torch Distributed Experimental ☆115 · Updated 6 months ago
- ☆100 · Updated 5 months ago
- Benchmark code for the "Online normalizer calculation for softmax" paper ☆67 · Updated 6 years ago
- Boosting 4-bit inference kernels with 2:4 sparsity ☆64 · Updated 5 months ago
- Blazing fast training of 🤗 Transformers on Graphcore IPUs ☆85 · Updated 11 months ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆104 · Updated 2 months ago
- ☆67 · Updated 3 months ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆130 · Updated 3 years ago
- ☆117 · Updated 11 months ago
- This repository contains the results and code for the MLPerf™ Training v1.0 benchmark. ☆38 · Updated 11 months ago
- Memory Optimizations for Deep Learning (ICML 2023) ☆62 · Updated 11 months ago
- Distributed preprocessing and data loading for language datasets ☆39 · Updated 10 months ago
- oneCCL Bindings for PyTorch* ☆89 · Updated last month
- [NeurIPS'23] Speculative Decoding with Big Little Decoder ☆88 · Updated last year
- Extensible collectives library in Triton ☆83 · Updated 4 months ago
- ☆157 · Updated last year
- ☆26 · Updated 3 years ago
- Benchmarks to capture important workloads. ☆29 · Updated 3 weeks ago
- Benchmarking some transformer deployments ☆26 · Updated last year
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5) ☆234 · Updated 3 months ago
- This repository contains the results and code for the MLPerf™ Training v0.7 benchmark. ☆56 · Updated last year