suquark / llm4phd
Examples and instructions for using LLMs (especially ChatGPT) for PhD work
☆106Updated 2 years ago
Alternatives and similar repositories for llm4phd
Users that are interested in llm4phd are comparing it to the libraries listed below
- My Curriculum Vitae☆63Updated 4 years ago
- [ICLR 2025] DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference☆49Updated 7 months ago
- ChatGPT - Review & Rebuttal: A browser extension for generating reviews and rebuttals, powered by ChatGPT.☆251Updated 2 years ago
- A comprehensive overview of Data Distillation and Condensation (DDC). DDC is a data-centric task where a representative (i.e., small but …☆13Updated 3 years ago
- ☆26Updated 4 years ago
- ☆105Updated 11 months ago
- ☆36Updated 6 years ago
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding"☆123Updated last year
- A thin wrapper around ChatGPT for improving paper writing.☆253Updated 2 years ago
- Efficient research work environment setup for computer science and general workflow for Deep Learning experiments☆126Updated 4 years ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling☆87Updated 2 years ago
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.☆18Updated last year
- A simple PyTorch implementation of Flash MultiHead Attention☆21Updated 2 years ago
- ☆78Updated 3 years ago
- ICLR2024 statistics☆47Updated 2 years ago
- https://csstipendrankings.org☆220Updated last week
- Official implementation of "Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization"☆82Updated last year
- ☆221Updated 2 months ago
- SuperDebug: debugging made simple!☆17Updated 3 years ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code☆51Updated 7 months ago
- ☆101Updated 4 years ago
- Tiny-FSDP, a minimalistic re-implementation of the PyTorch FSDP☆94Updated 5 months ago
- ☆41Updated last year
- The implementation for MLSys 2023 paper: "Cuttlefish: Low-rank Model Training without All The Tuning"☆45Updated 2 years ago
- ☆35Updated 11 months ago
- ☆50Updated 2 years ago
- ☆210Updated last month
- differentiable top-k operator☆22Updated last year
- OpenReview Submission Visualization (ICLR 2024/2025)☆154Updated last year
- flex-block-attn: an efficient block sparse attention computation library☆108Updated last month