CisMine / Setup-as-Cuda-programmers
Setup CUDA
☆15, updated 3 months ago
Related projects:
- NVIDIA tools guide (☆60, updated last month)
- CUDA learning guide (☆203, updated 3 months ago)
- Learning about CUDA by writing PTX code (☆28, updated 6 months ago)
- Solve puzzles. Learn CUDA. (☆53, updated 9 months ago)
- Personal solutions to the Triton Puzzles (☆11, updated 2 months ago)
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI (☆88, updated 11 months ago)
- Implement neural networks in CUDA from scratch (☆18, updated 4 months ago)
- Neural network from scratch in CUDA/C++ (☆65, updated 11 months ago)
- Learn CUDA with PyTorch (☆11, updated last month)
- Custom kernels in the Triton language for accelerating LLMs (☆14, updated 5 months ago)
- ML/DL math and method notes (☆56, updated 9 months ago)
- PyTorch half-precision GEMM library with fused optional bias + optional ReLU/GELU (☆25, updated 3 weeks ago)
- Mixed-precision training from scratch with tensors and CUDA (☆18, updated 4 months ago)
- Two implementations of ZeRO-1 optimizer sharding in JAX (☆12, updated last year)
- Cataloging released Triton kernels (☆111, updated 3 weeks ago)
- Learning Compiler Pass Orders using Coreset and Normalized Value Prediction (ICML 2023) (☆17, updated last year)
- Ring-attention experiments (☆89, updated 5 months ago)
- A port of the Mistral-7B model to JAX (☆29, updated 2 months ago)
- Demo of the unit_scaling library, showing how a model can easily be adapted to train in FP8 (☆34, updated 2 months ago)
- Experiment using Tangent to autodiff Triton (☆66, updated 7 months ago)
- A set of hands-on tutorials for CUDA programming (☆181, updated 5 months ago)
- A user-friendly toolchain that enables seamless execution of ONNX models using JAX as the backend (☆94, updated this week)
- Introductory lecture on PyTorch (☆15, updated 2 years ago)