Motsepe-Jr / AI-research-papers-pseudo-code
This repo covers pseudocode for AI research papers.
☆14 · Updated last year
Alternatives and similar repositories for AI-research-papers-pseudo-code
Users interested in AI-research-papers-pseudo-code are comparing it to the repositories listed below:
- Simple implementation of Speculative Sampling in NumPy for GPT-2 (see the first sketch after this list). ☆91 · Updated last year
- Cataloging released Triton kernels. ☆176 · Updated last month
- Applied AI experiments and examples for PyTorch ☆237 · Updated this week
- ring-attention experiments ☆126 · Updated 4 months ago
- Mixed precision training from scratch with Tensors and CUDA ☆21 · Updated 9 months ago
- Easy and Efficient Quantization for Transformers ☆192 · Updated 3 weeks ago
- Explorations into some recent techniques surrounding speculative decoding ☆245 · Updated 2 months ago
- Code for studying the super weight in LLM ☆87 · Updated 3 months ago
- ☆186 · Updated last week
- Collection of kernels written in Triton language ☆108 · Updated 2 weeks ago
- Prune transformer layers ☆68 · Updated 9 months ago
- ☆116 · Updated 11 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆65 · Updated 6 months ago
- A minimal cache manager for PagedAttention, on top of llama3 (see the second sketch after this list). ☆70 · Updated 6 months ago
- ☆25 · Updated last year
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization (see the third sketch after this list). ☆335 · Updated 6 months ago
- ☆100 · Updated 6 months ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆116 · Updated last year
- ☆17 · Updated last year
- ☆145 · Updated 3 weeks ago
- ☆125 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX ☆221 · Updated 7 months ago
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆273 · Updated this week
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆157 · Updated 7 months ago
- ☆146 · Updated last year
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆188 · Updated this week
- ☆138 · Updated last year
- Fast low-bit matmul kernels in Triton ☆250 · Updated last week
- ☆94 · Updated last year
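Sketch 1 (referenced from the speculative sampling entry above): a minimal NumPy sketch of the accept/reject rule behind speculative sampling (Chen et al., 2023), not the linked repo's actual code. The names `target_probs`, `draft_probs`, and `draft_tokens` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def speculative_accept(target_probs, draft_probs, draft_tokens):
    """Accept/reject a block of K draft tokens.

    target_probs, draft_probs: (K, vocab) next-token distributions from
    the target and draft models at each draft position.
    draft_tokens: (K,) tokens sampled from the draft model.
    Returns the accepted prefix, plus one corrected token on rejection.
    """
    accepted = []
    for i, tok in enumerate(draft_tokens):
        p, q = target_probs[i, tok], draft_probs[i, tok]
        if rng.random() < min(1.0, p / q):   # accept with prob min(1, p/q)
            accepted.append(int(tok))
        else:                                 # reject: resample from the residual
            residual = np.maximum(target_probs[i] - draft_probs[i], 0.0)
            residual /= residual.sum()
            accepted.append(int(rng.choice(residual.size, p=residual)))
            break
    return accepted  # if all K accepted, the caller samples one bonus token from the target
```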
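Sketch 2 (referenced from the PagedAttention cache manager entry above): a toy block allocator in the spirit of PagedAttention, where each sequence maps logical KV-cache blocks to physical blocks through a block table. This is a sketch of the general idea under assumed names (`BlockAllocator`, `append_token`), not the linked repo's API.

```python
class BlockAllocator:
    """Toy PagedAttention-style KV-cache manager."""

    def __init__(self, num_blocks, block_size):
        self.block_size = block_size
        self.free = list(range(num_blocks))  # free list of physical block ids
        self.tables = {}                     # seq_id -> list of physical blocks

    def append_token(self, seq_id, pos):
        """Return the physical (block, offset) slot for token position `pos`."""
        table = self.tables.setdefault(seq_id, [])
        if pos % self.block_size == 0:       # crossed a block boundary
            if not self.free:
                raise MemoryError("KV cache exhausted; evict or preempt a sequence")
            table.append(self.free.pop())    # lazily allocate one more block
        return table[pos // self.block_size], pos % self.block_size

    def free_seq(self, seq_id):
        """Return a finished sequence's blocks to the free list."""
        self.free.extend(self.tables.pop(seq_id, []))

mgr = BlockAllocator(num_blocks=4, block_size=16)
for pos in range(20):                        # 20 tokens occupy 2 of the 4 blocks
    blk, off = mgr.append_token("seq0", pos)
mgr.free_seq("seq0")
```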
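Sketch 3 (referenced from the KVQuant entry above): a heavily simplified illustration of KVQuant's core observation that keys quantize best per-channel and values per-token. KVQuant itself additionally uses non-uniform codes, pre-RoPE key quantization, and dense-and-sparse outlier handling; the uniform min-max scheme below is an assumption for brevity.

```python
import numpy as np

def quantize(x, axis, bits=4):
    """Uniform min-max quantization along `axis`; returns codes + dequant params."""
    lo = x.min(axis=axis, keepdims=True)
    hi = x.max(axis=axis, keepdims=True)
    scale = (hi - lo) / (2**bits - 1)
    scale = np.where(scale == 0, 1.0, scale)          # avoid divide-by-zero
    codes = np.round((x - lo) / scale).astype(np.uint8)
    return codes, scale, lo

def dequantize(codes, scale, lo):
    return codes * scale + lo

K = np.random.randn(128, 64)   # (tokens, head_dim)
V = np.random.randn(128, 64)
# Keys: outliers cluster per channel, so share a scale across tokens (axis=0).
k_codes, k_scale, k_lo = quantize(K, axis=0)
# Values: outliers cluster per token, so share a scale across channels (axis=1).
v_codes, v_scale, v_lo = quantize(V, axis=1)
```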