sangmichaelxie / cs324_p2
Project 2 (Building Large Language Models) for Stanford CS324: Understanding and Developing Large Language Models (Winter 2022)
☆105 · Updated 2 years ago
Alternatives and similar repositories for cs324_p2
Users interested in cs324_p2 are comparing it to the repositories listed below.
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1GPU + 1Day ☆256 · Updated last year
- A puzzle to learn about prompting ☆132 · Updated 2 years ago
- Website for hosting the Open Foundation Models Cheat Sheet. ☆267 · Updated 3 months ago
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆223 · Updated last year
- An interactive exploration of Transformer programming. ☆269 · Updated last year
- Functional local implementations of main model parallelism approaches ☆96 · Updated 2 years ago
- A comprehensive deep dive into the world of tokens ☆226 · Updated last year
- Extract full next-token probabilities via language model APIs ☆247 · Updated last year
- ☆267 · Updated 7 months ago
- Scaling Data-Constrained Language Models ☆339 · Updated 2 months ago
- RuLES: a benchmark for evaluating rule-following in language models ☆230 · Updated 6 months ago
- A really tiny autograd engine ☆95 · Updated 3 months ago
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆87 · Updated last year
- ☆166 · Updated 2 years ago
- ML/DL Math and Method notes ☆63 · Updated last year
- The GitHub repo for Goal Driven Discovery of Distributional Differences via Language Descriptions ☆70 · Updated 2 years ago
- Code repository for the c-BTM paper ☆107 · Updated last year
- ☆88 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆91 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆270 · Updated last year
- ☆292 · Updated last year
- TART: A plug-and-play Transformer module for task-agnostic reasoning ☆200 · Updated 2 years ago
- Experiments on speculative sampling with Llama models ☆128 · Updated 2 years ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆191 · Updated last year
- Comprehensive analysis of differences in performance of QLoRA, LoRA, and full fine-tunes ☆82 · Updated last year
- [NeurIPS 2023] Learning Transformer Programs ☆163 · Updated last year
- This code repository contains the code used for my "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog po… ☆92 · Updated 2 years ago
- ☆93 · Updated last year
- ☆37 · Updated 2 years ago
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆107 · Updated last year