stanford-cs324 / winter2023
☆37 · Updated 2 years ago
Alternatives and similar repositories for winter2023
Users interested in winter2023 are comparing it to the repositories listed below.
- Project 2 (Building Large Language Models) for Stanford CS324: Understanding and Developing Large Language Models (Winter 2022) ☆105 · Updated 2 years ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] ☆61 · Updated 10 months ago
- ☆67 · Updated last year
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆78 · Updated last year
- ☆51 · Updated last year
- Supercharge Hugging Face Transformers with model parallelism. ☆77 · Updated last month
- NAACL '24 (Best Demo Paper Runner-Up) / MLSys @ NeurIPS '23 - RedCoast: A Lightweight Tool to Automate Distributed Training and Inference ☆68 · Updated 8 months ago
- Make Triton easier ☆47 · Updated last year
- train with kittens! ☆62 · Updated 9 months ago
- [ICML 2023] "Outline, Then Details: Syntactically Guided Coarse-To-Fine Code Generation", Wenqing Zheng, S P Sharan, Ajay Kumar Jaiswal, … ☆40 · Updated last year
- This is a new metric that can be used to evaluate the faithfulness of text generated by LLMs. The work behind this repository can be found he… ☆31 · Updated last year
- ML/DL Math and Method notes ☆63 · Updated last year
- Minimum Description Length probing for neural network representations ☆18 · Updated 6 months ago
- Understanding the correlation between different LLM benchmarks ☆29 · Updated last year
- ☆149 · Updated last year
- ☆36 · Updated last month
- Open Implementations of LLM Analyses ☆106 · Updated 10 months ago
- RepoQA: Evaluating Long-Context Code Understanding ☆114 · Updated 9 months ago
- Language models scale reliably with over-training and on downstream tasks ☆98 · Updated last year
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆223 · Updated last year
- NeurIPS 2024 tutorial on LLM Inference ☆45 · Updated 8 months ago
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆56 · Updated last week
- ☆61 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated last year
- Evaluating LLMs with fewer examples ☆160 · Updated last year
- Repository for analysis and experiments in the BigCode project ☆121 · Updated last year
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated last year
- Commit0: Library Generation from Scratch ☆161 · Updated 3 months ago
- Utilities for efficient fine-tuning, inference and evaluation of code generation models ☆21 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆44 · Updated last year