stanford-cs324 / winter2023
☆38 · Updated 2 years ago
Alternatives and similar repositories for winter2023
Users interested in winter2023 are comparing it to the repositories listed below.
- Project 2 (Building Large Language Models) for Stanford CS324: Understanding and Developing Large Language Models (Winter 2022) ☆105 · Updated 2 years ago
- ☆53 · Updated last year
- [ICML 2023] "Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation", Wenqing Zheng, S P Sharan, Ajay Kumar Jaiswal, … ☆43 · Updated 2 years ago
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆69 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS … ☆60 · Updated last year
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆79 · Updated last year
- Minimum Description Length probing for neural network representations ☆20 · Updated 11 months ago
- train with kittens! ☆63 · Updated last year
- Open Implementations of LLM Analyses ☆107 · Updated last year
- A new metric for evaluating the faithfulness of text generated by LLMs; the work behind this repository can be found here ☆31 · Updated 2 years ago
- Fluid Language Model Benchmarking ☆25 · Updated 4 months ago
- Make Triton easier ☆50 · Updated last year
- ML/DL Math and Method notes ☆66 · Updated 2 years ago
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated last year
- ☆49 · Updated 2 years ago
- Supercharge Hugging Face Transformers with model parallelism. ☆77 · Updated 5 months ago
- ☆150 · Updated 2 years ago
- ☆75 · Updated last year
- Source code for GReaTer (ICLR 2025): Gradient Over Reasoning Makes Smaller Language Models Strong Prompt Optimizers ☆34 · Updated 9 months ago
- Mixture of Experts (MoE) techniques for enhancing LLM performance through expert-driven prompt mapping and adapter combinations ☆12 · Updated last year
- Functional Benchmarks and the Reasoning Gap ☆89 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- ☆62 · Updated 2 years ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆63 · Updated last year
- We view Large Language Models as stochastic language layers in a network, where the learnable parameters are the natural language prompts… ☆95 · Updated last year
- Understanding the correlation between different LLM benchmarks ☆29 · Updated 2 years ago
- Repository for the code and dataset for the paper: "Have LLMs Advanced Enough? Towards Harder Problem Solving Benchmarks For Large Language Models" ☆39 · Updated 2 years ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆45 · Updated 3 months ago
- The GitHub repo for Goal Driven Discovery of Distributional Differences via Language Descriptions ☆71 · Updated 2 years ago
- ☆167 · Updated 2 years ago