stanford-cs324 / winter2023
☆38 · Updated 2 years ago
Alternatives and similar repositories for winter2023
Users interested in winter2023 are comparing it to the repositories listed below.
- ☆52 · Updated last year
- Project 2 (Building Large Language Models) for Stanford CS324: Understanding and Developing Large Language Models (Winter 2022) ☆105 · Updated 2 years ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆78 · Updated last year
- Supercharge Hugging Face transformers with model parallelism. ☆77 · Updated 3 months ago
- ML/DL Math and Method notes ☆64 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆68 · Updated 10 months ago
- ChatGPT Participates in a Computer Science Exam (2023) ☆31 · Updated 2 years ago
- [ICML 2023] "Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation", Wenqing Zheng, S P Sharan, Ajay Kumar Jaiswal, … ☆42 · Updated last year
- Distill ChatGPT's coding ability into a small model (1B) ☆30 · Updated 2 years ago
- ☆73 · Updated last year
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆59 · Updated last year
- ☆166 · Updated 2 years ago
- Minimum Description Length probing for neural network representations ☆20 · Updated 8 months ago
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated last year
- Make Triton easier ☆48 · Updated last year
- train with kittens! ☆63 · Updated last year
- A new metric for evaluating the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated 2 years ago
- ☆26 · Updated last year
- Understanding the correlation between different LLM benchmarks ☆29 · Updated last year
- We view Large Language Models as stochastic language layers in a network, where the learnable parameters are the natural language prompts… ☆94 · Updated last year
- Open Implementations of LLM Analyses ☆107 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- ☆149 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆224 · Updated last month
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆43 · Updated 3 weeks ago
- Advanced Reasoning Benchmark Dataset for LLMs ☆46 · Updated last year
- [EMNLP 2023 Industry Track] A simple prompting approach that enables LLMs to run inference in batches. ☆76 · Updated last year
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year
- The code implementation of MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models… ☆37 · Updated last year