Motsepe-Jr / AI-research-papers-pseudo-code
This repo covers pseudocode for AI research papers.
☆12 · Updated last year

Alternatives and similar repositories for AI-research-papers-pseudo-code:
Users interested in AI-research-papers-pseudo-code are comparing it to the libraries listed below.
- Mixed precision training from scratch with Tensors and CUDA ☆21 · Updated 8 months ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆66 · Updated 9 months ago
- ☆17 · Updated last year
- Simple implementation of Speculative Sampling in NumPy for GPT-2 ☆90 · Updated last year
- ☆83 · Updated 7 months ago
- ☆114 · Updated 10 months ago
- Ring-attention experiments ☆116 · Updated 3 months ago
- ☆94 · Updated last year
- Explorations into some recent techniques surrounding speculative decoding ☆229 · Updated 3 weeks ago
- ☆135 · Updated last year
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still a work in progress)* ☆81 · Updated last year
- A toolkit for scaling-law research ⚖ ☆43 · Updated last month
- ☆37 · Updated 9 months ago
- Easy and efficient quantization for transformers ☆191 · Updated last month
- PyTorch building blocks for OLMo ☆47 · Updated this week
- ☆124 · Updated 11 months ago
- Some common Hugging Face transformers in maximal update parametrization (µP) ☆78 · Updated 2 years ago
- Inference code for LLaMA models in JAX ☆114 · Updated 7 months ago
- ☆140 · Updated last year
- Learn CUDA with PyTorch ☆14 · Updated 2 months ago
- CUDA and Triton implementations of Flash Attention with SoftmaxN ☆67 · Updated 7 months ago
- ☆34 · Updated last year
- Boosting 4-bit inference kernels with 2:4 sparsity ☆64 · Updated 4 months ago
- Triton implementation of the HyperAttention algorithm ☆46 · Updated last year
- The simplest implementation of recent sparse-attention patterns for efficient LLM inference ☆55 · Updated 2 months ago
- The Efficiency Spectrum of LLMs ☆52 · Updated last year
- Minimal but scalable implementation of large language models in JAX ☆28 · Updated 2 months ago
- RL algorithm: Advantage-Induced Policy Alignment ☆62 · Updated last year
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training ☆121 · Updated 9 months ago
- ☆75 · Updated 6 months ago
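Several entries above revolve around speculative sampling/decoding. As a rough illustration of the core accept/reject loop those repos implement, here is a minimal NumPy sketch. The `draft_probs` and `target_probs` callables and the toy distributions in the usage note are hypothetical stand-ins for real draft and target language models, and the final "bonus token" step of the full algorithm is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(probs):
    """Draw one token index from a categorical distribution."""
    return int(rng.choice(len(probs), p=probs))

def speculative_step(draft_probs, target_probs, k=4):
    """One round of speculative sampling (sketch).

    draft_probs / target_probs: callables mapping a token-id list to a
    next-token distribution (hypothetical stand-ins for real models).
    Returns the tokens accepted in this round.
    """
    # 1. The cheap draft model proposes k tokens autoregressively.
    drafted, ctx = [], []
    for _ in range(k):
        q = draft_probs(ctx)
        t = sample(q)
        drafted.append((t, q))
        ctx = ctx + [t]

    # 2. The target model verifies each drafted token in turn:
    #    accept with probability min(1, p[t] / q[t]).
    accepted, ctx = [], []
    for t, q in drafted:
        p = target_probs(ctx)
        if rng.random() < min(1.0, p[t] / q[t]):
            accepted.append(t)
            ctx = ctx + [t]
        else:
            # Reject: resample from the residual distribution
            # max(0, p - q), renormalized, then stop this round.
            residual = np.maximum(p - q, 0.0)
            residual /= residual.sum()
            accepted.append(sample(residual))
            break
    return accepted
```

With toy distributions over a 3-token vocabulary, e.g. `speculative_step(lambda ctx: np.array([0.5, 0.3, 0.2]), lambda ctx: np.array([0.4, 0.4, 0.2]))`, each round yields between 1 and `k` tokens whose overall distribution matches the target model, which is the property that makes speculative decoding a lossless speed-up.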