Motsepe-Jr / AI-research-papers-pseudo-code
This repo covers pseudocode for AI research papers.
☆17 · Updated 2 years ago
Alternatives and similar repositories for AI-research-papers-pseudo-code
Users interested in AI-research-papers-pseudo-code are comparing it to the libraries listed below.
- Simple implementation of Speculative Sampling in NumPy for GPT-2 (see the speculative-sampling sketch after this list). ☆98 · Updated 2 years ago
- Explorations into some recent techniques surrounding speculative decoding ☆288 · Updated 10 months ago
- ☆121 · Updated last year
- A minimal implementation of vllm. ☆60 · Updated last year
- Applied AI experiments and examples for PyTorch ☆301 · Updated 2 months ago
- Easy and Efficient Quantization for Transformers ☆202 · Updated 4 months ago
- A minimal cache manager for PagedAttention, on top of llama3 (see the paged-KV sketch after this list). ☆125 · Updated last year
- Cataloging released Triton kernels. ☆264 · Updated last month
- ☆17 · Updated 2 years ago
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆461 · Updated 6 months ago
- ☆225 · Updated 2 weeks ago
- ring-attention experiments ☆155 · Updated last year
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆216 · Updated this week
- 📑 Dive into Big Model Training ☆114 · Updated 2 years ago
- ☆156 · Updated 2 years ago
- This repository contains the experimental PyTorch native float8 training UX ☆223 · Updated last year
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆147 · Updated last year
- Fast low-bit matmul kernels in Triton ☆388 · Updated last week
- ☆121 · Updated last year
- A bunch of kernels that might make stuff slower 😉 ☆64 · Updated this week
- Triton-based implementation of Sparse Mixture of Experts. ☆247 · Updated last month
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆270 · Updated 3 months ago
- ☆149 · Updated 2 years ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆323 · Updated last year
- ☆174 · Updated last year
- ☆246 · Updated this week
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆217 · Updated last year
- Prune transformer layers ☆69 · Updated last year
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization (see the INT8 sketch after this list) ☆389 · Updated last year
- Collection of kernels written in Triton language ☆159 · Updated 7 months ago
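
A few of the entries above revolve around a single well-known technique, so minimal sketches follow for readers comparing the repos. First, the speculative-sampling accept/reject rule, in the spirit of the NumPy/GPT-2 implementation listed above. This is a hedged sketch only: `draft_probs` and `target_probs` are hypothetical stand-ins for a cheap draft model and the full target model (each mapping a token list to a probability vector over the vocabulary), not any listed repo's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

def speculative_step(prefix, draft_probs, target_probs, k=4):
    """One speculative-decoding step: draft k tokens with the cheap
    model, then accept/reject each against the target distribution."""
    drafted = []
    for _ in range(k):
        q = draft_probs(prefix + drafted)           # draft distribution
        drafted.append(int(rng.choice(len(q), p=q)))
    accepted = []
    for i, t in enumerate(drafted):
        ctx = prefix + drafted[:i]                  # equals prefix + accepted here
        p_t = target_probs(ctx)[t]
        q_t = draft_probs(ctx)[t]
        if rng.random() < min(1.0, p_t / q_t):      # accept with prob min(1, p/q)
            accepted.append(t)
        else:
            # On the first rejection, resample from the residual
            # distribution max(p - q, 0) and stop; this keeps the output
            # distributed exactly as the target model would produce it.
            resid = np.maximum(target_probs(ctx) - draft_probs(ctx), 0.0)
            resid /= resid.sum()
            accepted.append(int(rng.choice(len(resid), p=resid)))
            break
    # (The full algorithm also samples one bonus token from the target
    # model when every draft token is accepted; omitted for brevity.)
    return accepted
```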
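Next, a toy block table in the style of PagedAttention-based KV-cache management, as in the llama3 cache-manager entry above. The class and method names are illustrative only, not taken from that repo or from vLLM; the idea is simply that each sequence maps to a list of fixed-size physical blocks drawn from a shared free list.

```python
class PagedKVCache:
    """Minimal sketch of a paged KV-cache block table (assumed names)."""

    def __init__(self, num_blocks, block_size):
        self.block_size = block_size
        self.free = list(range(num_blocks))   # free physical block ids
        self.tables = {}                      # seq_id -> list of block ids
        self.lens = {}                        # seq_id -> tokens stored

    def append_token(self, seq_id):
        """Reserve a slot for one token's K/V vectors, allocating a new
        block when the last one is full. Returns (block_id, offset)."""
        n = self.lens.get(seq_id, 0)
        if n % self.block_size == 0:          # current block full (or none yet)
            if not self.free:
                raise MemoryError("no free KV blocks; evict or preempt")
            self.tables.setdefault(seq_id, []).append(self.free.pop())
        self.lens[seq_id] = n + 1
        return self.tables[seq_id][-1], n % self.block_size

    def release(self, seq_id):
        """Return a finished sequence's blocks to the free list."""
        self.free.extend(self.tables.pop(seq_id, []))
        self.lens.pop(seq_id, None)
```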
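Finally, a round-to-nearest symmetric INT8 baseline showing what KV-cache quantization does at its simplest. This is a generic sketch, not the KVQuant algorithm itself (which goes well beyond round-to-nearest); function names and the per-channel axis choice are assumptions for illustration.

```python
import numpy as np

def quantize_kv(kv, axis=0):
    """Symmetric INT8 quantization with one scale per channel
    (max-abs reduced over the token axis)."""
    scale = np.abs(kv).max(axis=axis, keepdims=True) / 127.0
    scale = np.where(scale == 0.0, 1.0, scale)   # guard all-zero channels
    q = np.clip(np.round(kv / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_kv(q, scale):
    return q.astype(np.float32) * scale

# Toy usage: a [tokens, head_dim] slice of a key cache.
kv = np.random.randn(8, 16).astype(np.float32)
q, s = quantize_kv(kv)
print("max abs error:", np.abs(dequantize_kv(q, s) - kv).max())
```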