naklecha / llm-inference-optimizations-explained
In this repository, I'm going to implement increasingly complex LLM inference optimizations.
☆20 · Updated this week
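As an illustrative sketch of the repository's topic (this is not code from the repository, and all names below are hypothetical), KV caching is the canonical first LLM inference optimization: during autoregressive decoding, keys and values for past tokens are cached so each step only projects the newest token instead of recomputing the whole prefix.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d = 8                                     # toy head dimension
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

k_cache, v_cache = [], []                 # grows by one entry per decoded token

def decode_step(x_new):
    """Attend the newest token over all cached positions."""
    q = x_new @ Wq
    k_cache.append(x_new @ Wk)            # only the new token's K/V are computed
    v_cache.append(x_new @ Wv)
    K, V = np.stack(k_cache), np.stack(v_cache)
    attn = softmax(q @ K.T / np.sqrt(d))  # scaled dot-product attention weights
    return attn @ V

for _ in range(4):                        # decode four tokens
    out = decode_step(rng.standard_normal(d))
print(out.shape)  # (8,)
```

Without the cache, each step would recompute K and V for the entire prefix, making decoding quadratic in sequence length per token.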
Alternatives and similar repositories for llm-inference-optimizations-explained:
- ☆46 · Updated last month
- A really tiny autograd engine ☆92 · Updated last year
- A tiny vectorstore implementation built with numpy. ☆62 · Updated last year
- prime-rl is a codebase for decentralized RL training at scale ☆89 · Updated this week
- NanoGPT-speedrunning for the poor T4 enjoyers ☆64 · Updated 2 weeks ago
- Small autograd engine inspired by Karpathy's micrograd and PyTorch ☆268 · Updated 5 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆180 · Updated last week
- Following master Karpathy with a GPT-2 implementation and training, writing lots of comments because I have the memory of a goldfish ☆174 · Updated 9 months ago
- Simple Transformer in Jax ☆136 · Updated 10 months ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆342 · Updated last month
- PTX-Tutorial written purely by AIs (OpenAI's Deep Research and Claude 3.7) ☆66 · Updated last month
- Compiling useful links, papers, benchmarks, ideas, etc. ☆46 · Updated last month
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆98 · Updated 2 months ago
- GPU Kernels ☆172 · Updated last week
- Collection of autoregressive model implementations ☆85 · Updated 2 weeks ago
- ☆159 · Updated 4 months ago
- Could we make an ML stack in 100,000 lines of code? ☆42 · Updated 9 months ago
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆172 · Updated 3 months ago
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … ☆257 · Updated this week
- look how they massacred my boy ☆63 · Updated 6 months ago
- Small-scale distributed training of sequential deep learning models, built on Numpy and MPI. ☆131 · Updated last year
- ☆78 · Updated 10 months ago
- Minimal (400 LOC) implementation of maximum (multi-node, FSDP) GPT training ☆123 · Updated last year
- Learnings and programs related to CUDA ☆380 · Updated 2 months ago
- Working implementation of DeepSeek MLA ☆41 · Updated 4 months ago
- ☆181 · Updated 2 months ago
- So, I trained a 130M Llama architecture I coded from the ground up to build a small instruct model from scratch. Trained on FineWeb dataset… ☆14 · Updated last month
- Entropy Based Sampling and Parallel CoT Decoding ☆17 · Updated 7 months ago
- Focused on fast experimentation and simplicity ☆72 · Updated 4 months ago
- Exploring Applications of GRPO ☆189 · Updated last week