evanmiller / LLM-Reading-List
LLM papers I'm reading, mostly on inference and model compression
☆735 · Updated last year
Alternatives and similar repositories for LLM-Reading-List
Users interested in LLM-Reading-List are comparing it to the repositories listed below
- What would you do with 1000 H100s... ☆1,064 · Updated last year
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆808 · Updated last week
- Llama from scratch, or How to implement a paper without crying ☆572 · Updated last year
- Puzzles for exploring transformers ☆355 · Updated 2 years ago
- An ML Systems Onboarding list ☆840 · Updated 6 months ago
- A comprehensive deep dive into the world of tokens ☆224 · Updated last year
- ☆550 · Updated 11 months ago
- The Art of Debugging ☆905 · Updated 11 months ago
- 🤖 A PyTorch library of curated Transformer models and their composable components ☆892 · Updated last year
- Fine-tune mistral-7B on 3090s, a100s, h100s ☆715 · Updated last year
- Minimalistic, extremely fast, and hackable researcher's toolbench for GPT models in 307 lines of code. Reaches <3.8 validation loss on wi… ☆349 · Updated 11 months ago
- GPU programming related news and material links ☆1,625 · Updated 6 months ago
- Alex Krizhevsky's original code from Google Code ☆194 · Updated 9 years ago
- Following master Karpathy with GPT-2 implementation and training, writing lots of comments cause I have memory of a goldfish ☆172 · Updated 11 months ago
- Notes from the Latent Space paper club. Follow along or start your own! ☆235 · Updated 11 months ago
- An implementation of the transformer architecture onto an Nvidia CUDA kernel ☆188 · Updated last year
- A benchmark to evaluate language models on questions I've previously asked them to solve. ☆1,022 · Updated 2 months ago
- High Quality Resources on GPU Programming/Architecture ☆589 · Updated 11 months ago
- Building blocks for foundation models. ☆515 · Updated last year
- Slides, notes, and materials for the workshop ☆327 · Updated last year
- Fast & Simple repository for pre-training and fine-tuning T5-style models ☆1,006 · Updated 11 months ago
- Fast bare-bones BPE for modern tokenizer training ☆159 · Updated last month
- batched loras ☆344 · Updated last year
- ☆440 · Updated 9 months ago
- Best practices for distilling large language models. ☆565 · Updated last year
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆256 · Updated last year
- The Tensor (or Array) ☆438 · Updated 11 months ago
- LLM Workshop by Sourab Mangrulkar ☆388 · Updated last year
- ☆416 · Updated last year
- An open collection of methodologies to help with successful training of large language models. ☆505 · Updated last year