sangmichaelxie / cs324_p2
Project 2 (Building Large Language Models) for Stanford CS324: Understanding and Developing Large Language Models (Winter 2022)
☆101 · Updated last year
Alternatives and similar repositories for cs324_p2:
Users that are interested in cs324_p2 are comparing it to the libraries listed below
- NeurIPS Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day ☆255 · Updated last year
- Code repository for the c-BTM paper ☆106 · Updated last year
- Functional local implementations of the main model parallelism approaches ☆95 · Updated 2 years ago
- RuLES: a benchmark for evaluating rule-following in language models ☆219 · Updated 2 weeks ago
- A puzzle to learn about prompting ☆124 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts ☆215 · Updated 11 months ago
- Extract full next-token probabilities via language model APIs ☆231 · Updated last year
- Scaling Data-Constrained Language Models ☆334 · Updated 5 months ago
- An interactive exploration of Transformer programming ☆259 · Updated last year
- [NeurIPS 2023] Learning Transformer Programs ☆159 · Updated 9 months ago
- TART: a plug-and-play Transformer module for task-agnostic reasoning ☆195 · Updated last year
- Evaluating LLMs with CommonGen-Lite ☆89 · Updated 11 months ago
- Code used for the "Optimizing Memory Usage for Training LLMs and Vision Transformers in PyTorch" blog post ☆87 · Updated last year
- Comprehensive analysis of the performance differences between QLoRA, LoRA, and full fine-tunes ☆82 · Updated last year
- EasyLM: large language models (LLMs) made easy — a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Flax ☆66 · Updated 6 months ago
- The official repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆108 · Updated last year
- A MAD laboratory to improve AI architecture designs 🧪 ☆107 · Updated 2 months ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆69 · Updated 11 months ago
- Small and Efficient Mathematical Reasoning LLMs ☆71 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers" (NeurIPS 2023) ☆130 · Updated 10 months ago
- Code accompanying the paper "Pretraining Language Models with Human Preferences" ☆181 · Updated last year
- Large-scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still a work in progress)* ☆81 · Updated last year
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings" (McLeish et al., NeurIPS 2024) ☆186 · Updated 9 months ago
- The GitHub repo for "Goal Driven Discovery of Distributional Differences via Language Descriptions" ☆69 · Updated last year
- Experiments toward training a new and improved T5 ☆77 · Updated 10 months ago
- Fast bare-bones BPE for modern tokenizer training ☆149 · Updated 4 months ago
- Mixing Language Models with Self-Verification and Meta-Verification ☆102 · Updated 3 months ago