☆27 · Jan 8, 2024 · Updated 2 years ago
Alternatives and similar repositories for gpu_kernels
Users interested in gpu_kernels are comparing it to the libraries listed below.
- GPTQ inference Triton kernel ☆321 · May 18, 2023 · Updated 2 years ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆11 · Dec 13, 2023 · Updated 2 years ago
- Efficient Finetuning for OpenAI GPT-OSS ☆24 · Oct 2, 2025 · Updated 6 months ago
- TLLM_QMM strips the quantized-kernel implementations from Nvidia's TensorRT-LLM, removing the NVInfer dependency, and exposes ease-of-use Pyt… ☆16 · Jul 5, 2024 · Updated last year
- Sirius, an efficient correction mechanism that significantly boosts Contextual Sparsity models on reasoning tasks while maintaining its… ☆21 · Sep 10, 2024 · Updated last year
- ☆14 · Mar 10, 2024 · Updated 2 years ago
- Code & examples for "CUDA - From Correctness to Performance" ☆128 · Oct 24, 2024 · Updated last year
- ☆26 · Feb 17, 2025 · Updated last year
- Möbius Transformation for Fast Inner Product Search on Graph ☆22 · Jun 3, 2021 · Updated 4 years ago
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆506 · Jan 20, 2026 · Updated 3 months ago
- Inference Llama 2 in one file of pure CUDA ☆17 · Aug 20, 2023 · Updated 2 years ago
- Follow the nginx log and find the bad guys! ☆24 · Mar 7, 2026 · Updated last month
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆324 · Mar 4, 2025 · Updated last year
- JavaScript-powered Swype interface ☆16 · Apr 15, 2013 · Updated 13 years ago
- ☆150 · Jan 9, 2025 · Updated last year
- LLVM-Canon aims to transform LLVM modules into a canonical form by reordering and renaming instructions while preserving the same semanti… ☆31 · Apr 30, 2024 · Updated 2 years ago
- LongProc: Benchmarking Long-Context Language Models on Long Procedural Generation ☆33 · Feb 26, 2026 · Updated 2 months ago
- An easy-to-understand TensorOp Matmul Tutorial ☆428 · Mar 5, 2026 · Updated last month
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens ☆1,061 · Sep 4, 2024 · Updated last year
- vLLM plugin for RBLN NPU ☆47 · Updated this week
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆486 · Apr 19, 2025 · Updated last year
- A llama model inference framework implemented in CUDA C++ ☆65 · Nov 8, 2024 · Updated last year
- Implementation of the X-armed Bandits algorithm, as detailed in the paper "X-armed Bandits", Bubeck et al., 2011 ☆11 · Jul 12, 2018 · Updated 7 years ago
- Official code for Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM ☆14 · Dec 27, 2023 · Updated 2 years ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,641 · Jul 12, 2024 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts ☆273 · Oct 3, 2025 · Updated 6 months ago
- ☆57 · Nov 14, 2024 · Updated last year
- A minimal cache manager for PagedAttention, on top of llama3 ☆142 · Aug 26, 2024 · Updated last year
- ☆11 · Sep 4, 2022 · Updated 3 years ago
- Writing FLUX in Triton ☆42 · Sep 22, 2024 · Updated last year
- Demo of fine-tuning QA models for answering FAQs from cloud providers' documentation ☆11 · Mar 7, 2023 · Updated 3 years ago
- Parallel Self-Adjusting Computation ☆16 · Jul 5, 2021 · Updated 4 years ago
- ☆12 · Mar 13, 2023 · Updated 3 years ago
- Transformers components, but in Triton ☆34 · May 9, 2025 · Updated 11 months ago
- LLM Inference with Microscaling Format ☆34 · Nov 12, 2024 · Updated last year
- [WIP] Better (FP8) attention for Hopper ☆33 · Feb 24, 2025 · Updated last year
- A probabilistic graphical model for COVID-19 infection spread through a population based on mutual contacts between pairs of individuals… ☆13 · Oct 5, 2020 · Updated 5 years ago
- Applied AI experiments and examples for PyTorch ☆319 · Aug 22, 2025 · Updated 8 months ago
- Sample examples of how to call collective operation functions in multi-GPU environments: a simple example of using broadcast, reduce, all… ☆35 · Aug 28, 2023 · Updated 2 years ago