flin3500 / Cuda-Google-Colab
CUDA code mainly targets NVIDIA hardware. This repo shows how to run CUDA C or CUDA C++ code on the Google Colab platform for free.
⭐25 · Updated 2 years ago
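As a quick illustration of what the repo is about, here is a minimal sketch of running CUDA C in a Colab notebook, assuming a GPU runtime is enabled (Runtime > Change runtime type) and that nvcc from the preinstalled CUDA toolkit is on the PATH; the file name hello.cu and the kernel below are illustrative examples, not files from this repo:

```cuda
%%writefile hello.cu
// Minimal CUDA C example: every launched GPU thread prints its index.
#include <cstdio>

__global__ void hello_kernel() {
    printf("Hello from GPU thread %d\n", threadIdx.x);
}

int main() {
    hello_kernel<<<1, 4>>>();   // launch 1 block of 4 threads
    cudaDeviceSynchronize();    // wait for the kernel so device printf output is flushed
    return 0;
}
```

A second notebook cell then compiles and runs the file:

```
!nvcc hello.cu -o hello
!./hello
```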
Alternatives and similar repositories for Cuda-Google-Colab
Users who are interested in Cuda-Google-Colab are comparing it to the libraries listed below.
- Understand Docker step by step. A tutorial repo for beginners 🔥 ⭐18 · Updated 5 years ago
- Create an SSH tunnel to a running Colab notebook ⭐69 · Updated 4 years ago
- A plugin for Jupyter Notebook to run CUDA C/C++ code ⭐248 · Updated last year
- Simple problems implemented in CUDA C ⭐27 · Updated 6 months ago
- A C/C++ implementation of micrograd: a tiny autograd engine with a neural net on top. ⭐73 · Updated 2 years ago
- Neural network from scratch in CUDA/C++ ⭐87 · Updated last month
- Inference Vision Transformer (ViT) in plain C/C++ with ggml ⭐295 · Updated last year
- A simplified LLaMA implementation for training and inference tasks. ⭐33 · Updated 3 months ago
- A simple byte pair encoding (BPE) mechanism for tokenization, written purely in C ⭐139 · Updated 11 months ago
- [WIP] A 🔥 interface for running code in the cloud ⭐85 · Updated 2 years ago
- 3X speedup over Apple's TensorFlow plugin by using Apache TVM on M1 ⭐138 · Updated 3 years ago
- ⭐10 · Updated 2 years ago
- A really tiny autograd engine ⭐96 · Updated 5 months ago
- ⭐73 · Updated last year
- ⭐17 · Updated last year
- Python bindings for ggml ⭐146 · Updated last year
- CUDA Guide ⭐74 · Updated last year
- Scripts for text classification with LLaMA and BERT ⭐28 · Updated 3 months ago
- ctypes wrappers for HIP, CUDA, and OpenCL ⭐130 · Updated last year
- Learning about CUDA by writing PTX code. ⭐145 · Updated last year
- Can RL solve simple problems? ⭐54 · Updated last year
- Drop-in replacement for OpenAI, but with open models. ⭐153 · Updated 2 years ago
- This repository contains an overview of important follow-up works based on the original Vision Transformer (ViT) by Google. ⭐185 · Updated 3 years ago
- ⭐19 · Updated 2 years ago
- ⭐81 · Updated last week
- Learn CUDA with PyTorch ⭐95 · Updated last month
- Notes on "Programming Massively Parallel Processors" by Hwu, Kirk, and Hajj (4th ed.) ⭐53 · Updated last year
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ⭐199 · Updated this week
- Blazing fast training of 🤗 Transformers on Graphcore IPUs ⭐85 · Updated last year
- This code repository contains the code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog post ⭐91 · Updated 2 years ago