awslabs / nki-autotune
☆15 · Updated this week
Alternatives and similar repositories for nki-autotune
Users interested in nki-autotune are comparing it to the libraries listed below.
- ☆57 · Updated last week
- ☆63 · Updated 3 weeks ago
- Project showing how to develop NKI kernels for Llama 3.2 1B inference (☆20 · Updated 6 months ago)
- Powering AWS purpose-built machine learning chips. Blazing fast and cost-effective, natively integrated into PyTorch and TensorFlow and i… (☆562 · Updated last week)
- ☆25 · Updated 2 months ago
- Example code for AWS Neuron SDK developers building inference and training applications (☆151 · Updated last week)
- Collection of best practices, reference architectures, model training examples, and utilities for training large models on AWS (☆380 · Updated this week)
- ☆39 · Updated 11 months ago
- ☆111 · Updated 11 months ago
- ☆14 · Updated last year
- A CLI tool that helps manage training jobs on SageMaker HyperPod clusters orchestrated by Amazon EKS (☆33 · Updated this week)
- EFA/NCCL base AMI build Packer and CodeBuild/Pipeline files, plus base Docker build files to enable EFA/NCCL in containers (☆43 · Updated 2 years ago)
- ☆12 · Updated 6 months ago
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … (☆239 · Updated this week)
- A schedule language for large model training (☆151 · Updated 3 months ago)
- Training and inference on AWS Trainium and Inferentia chips (☆252 · Updated this week)
- ☆185 · Updated last year
- A plugin that lets EC2 developers use libfabric as the network provider when running NCCL applications (☆201 · Updated this week)
- Pipeline Parallelism for PyTorch (☆783 · Updated last year)
- 🚀 Collection of components for development, training, tuning, and inference of foundation models, leveraging PyTorch native components (☆216 · Updated this week)
- ☆14 · Updated last year
- ☆129 · Updated 3 weeks ago
- ☆23 · Updated 3 weeks ago
- Microsoft Collective Communication Library (☆377 · Updated 2 years ago)
- ☆81 · Updated 7 months ago
- ☆13 · Updated 2 months ago
- ☆57 · Updated last week
- Best practices for training DeepSeek, Mixtral, Qwen, and other MoE models using Megatron Core (☆137 · Updated last month)
- Scripts to customize AWS ParallelCluster (☆27 · Updated 3 months ago)
- KernelBench: Can LLMs Write GPU Kernels? Benchmark with Torch -> CUDA (+ more DSLs) (☆708 · Updated this week)