Motsepe-Jr / AI-research-papers-pseudo-code
This repo collects pseudocode for AI research papers.
☆17 · Updated 2 years ago
Alternatives and similar repositories for AI-research-papers-pseudo-code
Users interested in AI-research-papers-pseudo-code are comparing it to the libraries listed below.
- A minimal cache manager for PagedAttention, on top of llama3 (a rough sketch of the block-table idea follows this list). ☆130 · Updated last year
- Easy and Efficient Quantization for Transformers ☆202 · Updated 7 months ago
- A minimal implementation of vllm. ☆66 · Updated last year
- Simple implementation of Speculative Sampling in NumPy for GPT-2 (see the accept/reject sketch after this list). ☆99 · Updated 2 years ago
- Cataloging released Triton kernels. ☆289 · Updated 4 months ago
- Applied AI experiments and examples for PyTorch ☆314 · Updated 5 months ago
- ☆124 · Updated last year
- ring-attention experiments ☆163 · Updated last year
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆218 · Updated this week
- PyTorch bindings for CUTLASS grouped GEMM. ☆140 · Updated 7 months ago
- ☆27 · Updated 2 years ago
- Explorations into some recent techniques surrounding speculative decoding ☆298 · Updated last year
- Collection of kernels written in the Triton language ☆175 · Updated 9 months ago
- ☆229 · Updated 2 months ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆335 · Updated last year
- Triton implementation of Flash Attention 2.0 ☆47 · Updated 2 years ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆221 · Updated last year
- Flash Attention in 300-500 lines of CUDA/C++ ☆36 · Updated 5 months ago
- ☆17 · Updated 2 years ago
- ☆178 · Updated last year
- ☆157 · Updated 2 years ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆93 · Updated last year
- Prune transformer layers ☆74 · Updated last year
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆146 · Updated last year
- Fast low-bit matmul kernels in Triton ☆423 · Updated last month
- 📑 Dive into Big Model Training ☆116 · Updated 3 years ago
- A bunch of kernels that might make stuff slower 😉 ☆75 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated last month
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆478 · Updated 9 months ago
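
For the PagedAttention cache-manager entry above, here is a minimal sketch of the core idea: a per-sequence block table that maps logical token positions to fixed-size physical KV-cache blocks, allocated on demand. `BlockManager`, `BLOCK_SIZE`, and all names here are hypothetical illustrations, not the listed repo's actual API.

```python
# Minimal sketch of a PagedAttention-style KV-cache block manager.
# All names are hypothetical; this shows only the block-table indirection.
BLOCK_SIZE = 16  # tokens per physical KV-cache block


class BlockManager:
    def __init__(self, num_blocks: int):
        self.free = list(range(num_blocks))  # pool of free physical block ids
        self.tables = {}                     # seq_id -> list of physical block ids

    def slot_for(self, seq_id: int, pos: int) -> tuple[int, int]:
        """Map logical token position `pos` to a physical (block, offset) slot,
        allocating a new block when a block boundary is crossed."""
        table = self.tables.setdefault(seq_id, [])
        if pos // BLOCK_SIZE == len(table):  # first token of a new block
            table.append(self.free.pop())
        return table[pos // BLOCK_SIZE], pos % BLOCK_SIZE

    def release(self, seq_id: int) -> None:
        """Return a finished sequence's blocks to the free pool."""
        self.free.extend(self.tables.pop(seq_id, []))


mgr = BlockManager(num_blocks=8)
print([mgr.slot_for(0, p) for p in range(20)])  # crosses one block boundary
mgr.release(0)
```

Production engines such as vLLM layer reference counting, block sharing, and eviction on top of this indirection; the sketch only shows the logical-to-physical mapping.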
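Several entries above concern speculative decoding. As a rough illustration of the underlying accept/reject rule, here is a minimal NumPy sketch: a cheap draft model proposes k tokens, and the target model accepts each with probability min(1, p_target/p_draft), resampling from the residual distribution on rejection. `draft_probs` and `target_probs` are hypothetical toy stand-ins for a small draft model and a large target model, and the bonus token drawn from the target after a fully accepted block is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 50  # toy vocabulary size


def draft_probs(prefix):
    # Hypothetical cheap draft model: softmax over deterministic toy logits.
    logits = np.sin(np.arange(VOCAB) + len(prefix))
    e = np.exp(logits - logits.max())
    return e / e.sum()


def target_probs(prefix):
    # Hypothetical expensive target model, slightly different from the draft.
    logits = np.sin(np.arange(VOCAB) * 1.1 + len(prefix))
    e = np.exp(logits - logits.max())
    return e / e.sum()


def speculative_step(prefix, k=4):
    """Propose k draft tokens, then verify them against the target model."""
    # 1) Draft model proposes k tokens autoregressively.
    proposal, drafted = list(prefix), []
    for _ in range(k):
        p = draft_probs(proposal)
        tok = rng.choice(VOCAB, p=p)
        drafted.append((tok, p))
        proposal.append(tok)
    # 2) Accept token x with probability min(1, p_target(x) / p_draft(x)).
    out = list(prefix)
    for tok, p_draft in drafted:
        p_tgt = target_probs(out)
        if rng.random() < min(1.0, p_tgt[tok] / p_draft[tok]):
            out.append(tok)  # accepted: keep the draft token
        else:
            # Rejected: resample from the residual max(0, p_tgt - p_draft),
            # which keeps the overall output distribution equal to the target's.
            residual = np.maximum(p_tgt - p_draft, 0.0)
            out.append(rng.choice(VOCAB, p=residual / residual.sum()))
            return out  # stop at the first rejection
    return out


print(speculative_step([1, 2, 3]))
```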