naklecha / llm-inference-optimizations-explained
in this repository, i'm going to implement increasingly complex llm inference optimizations
☆68 · Updated 5 months ago
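The repository's description covers implementing increasingly complex LLM inference optimizations. As a rough illustration of the kind of technique that phrase refers to, below is a minimal NumPy sketch of KV caching, where keys and values of already-decoded tokens are stored and reused instead of recomputed at every decoding step. This is a hypothetical example for orientation only, not code from the repository; the class and function names are invented for the sketch.

```python
# Hypothetical sketch (not from the repository): KV caching, a common LLM
# inference optimization. During autoregressive decoding, the keys and values
# of past tokens are appended to a cache and reused at every step.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class CachedAttention:
    """Single-head attention that appends new keys/values to a cache."""
    def __init__(self, d_model, rng):
        self.wq = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
        self.wk = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
        self.wv = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
        self.k_cache = np.empty((0, d_model))
        self.v_cache = np.empty((0, d_model))

    def step(self, x):
        # x: (d_model,) hidden state of the newest token only
        q, k, v = x @ self.wq, x @ self.wk, x @ self.wv
        self.k_cache = np.vstack([self.k_cache, k])  # grows to (t, d_model)
        self.v_cache = np.vstack([self.v_cache, v])
        scores = self.k_cache @ q / np.sqrt(len(q))  # attend over all cached tokens
        return softmax(scores) @ self.v_cache

rng = np.random.default_rng(0)
attn = CachedAttention(d_model=16, rng=rng)
for token_state in rng.standard_normal((5, 16)):  # decode 5 tokens, one at a time
    out = attn.step(token_state)
print(out.shape)  # (16,)
```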
Alternatives and similar repositories for llm-inference-optimizations-explained
Users interested in llm-inference-optimizations-explained are comparing it to the repositories listed below.
- NanoGPT-speedrunning for the poor T4 enjoyers ☆72 · Updated 6 months ago
- rl from zero pretrain, can it be done? yes. ☆277 · Updated 3 weeks ago
- ☆46 · Updated 6 months ago
- Working implementation of DeepSeek MLA ☆44 · Updated 9 months ago
- A really tiny autograd engine ☆95 · Updated 5 months ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference ☆296 · Updated 2 months ago
- Simple Transformer in Jax ☆139 · Updated last year
- SIMD quantization kernels ☆87 · Updated last month
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆107 · Updated 7 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆193 · Updated 4 months ago
- Training-ready RL environments + evals ☆132 · Updated this week
- PTX tutorial written purely by AIs (OpenAI's Deep Research and Claude 3.7) ☆66 · Updated 7 months ago
- A collection of lightweight interpretability scripts to understand how LLMs think ☆61 · Updated this week
- Following master Karpathy with a GPT-2 implementation and training run, writing lots of comments because I have the memory of a goldfish ☆172 · Updated last year
- Exploring applications of GRPO ☆248 · Updated 2 months ago
- Compiling useful links, papers, benchmarks, ideas, etc. ☆45 · Updated 7 months ago
- Small autograd engine inspired by Karpathy's micrograd and PyTorch ☆276 · Updated 11 months ago
- Learning about CUDA by writing PTX code ☆144 · Updated last year
- Quantized LLM training in pure CUDA/C++ ☆206 · Updated last week
- Learnings and programs related to CUDA ☆420 · Updated 3 months ago
- A simple MLX implementation for pretraining LLMs on Apple Silicon ☆84 · Updated 2 months ago
- 👷 Build compute kernels ☆163 · Updated last week
- Simple and scalable pretraining for neural architecture research ☆297 · Updated 2 months ago
- ☆68 · Updated 5 months ago
- The Automated LLM Speedrunning Benchmark measures how well LLM agents can reproduce previous innovations and discover new ones in languag… ☆103 · Updated 2 weeks ago
- Collection of autoregressive model implementations ☆86 · Updated 6 months ago
- ☆222 · Updated 3 weeks ago
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism ☆103 · Updated 3 weeks ago
- Coding CUDA every day! ☆64 · Updated 6 months ago
- ☆510 · Updated 2 months ago