stanford-cs336 / assignment3-scaling
☆36 · Updated 5 months ago
Alternatives and similar repositories for assignment3-scaling
Users interested in assignment3-scaling are comparing it to the repositories listed below.
- Student version of Assignment 2 for Stanford CS336 - Language Modeling From Scratch ☆140 · Updated 5 months ago
- Open-source framework for the research and development of foundation models. ☆673 · Updated last week
- Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs" ☆569 · Updated 2 months ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆327 · Updated last month
- Advanced NLP, Spring 2025: https://cmu-l3.github.io/anlp-spring2025/ ☆69 · Updated 9 months ago
- Open-source interpretability artefacts for R1. ☆165 · Updated 8 months ago
- Physics of Language Models, Part 4 ☆280 · Updated 2 weeks ago
- Building blocks for foundation models. ☆585 · Updated last year
- Curated collection of community environments ☆196 · Updated this week
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ☆781 · Updated this week
- Student version of Assignment 1 for Stanford CS336 - Language Modeling From Scratch ☆1,026 · Updated 3 months ago
- Notes and commented code for RLHF (PPO) ☆120 · Updated last year
- Resources for skilling up in AI alignment research engineering. Covers basics of deep learning, mechanistic interpretability, and RL. ☆236 · Updated 4 months ago
- An extension of the nanoGPT repository for training small MoE models. ☆219 · Updated 9 months ago
- CS294/194-196 Large Language Model Agents ☆39 · Updated last year
- RL from zero pretrain: can it be done? Yes. ☆282 · Updated 3 months ago
- Dion optimizer algorithm ☆409 · Updated this week
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism. ☆104 · Updated 3 months ago