suquark / llm4phd
Examples and instructions on using LLMs (especially ChatGPT) for PhD work
☆108Updated 2 years ago
Alternatives and similar repositories for llm4phd
Users interested in llm4phd are comparing it to the libraries listed below.
Sorting:
- My Curriculum Vitae☆62Updated 3 years ago
- [ICLR 2025] DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference☆31Updated last month
- ☆75Updated 3 years ago
- ChatGPT - Review & Rebuttal: A browser extension for generating reviews and rebuttals, powered by ChatGPT.☆252Updated 2 years ago
- A thin wrapper around ChatGPT for improving paper writing.☆254Updated 2 years ago
- https://csstipendrankings.org☆211Updated this week
- ☆35Updated 5 years ago
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding"☆119Updated last year
- ☆26Updated 3 years ago
- Official implementation of "Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization"☆79Updated last year
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.☆18Updated last year
- ☆39Updated last year
- Efficient research work environment setup for computer science and general workflow for Deep Learning experiments☆126Updated 3 years ago
- A comprehensive overview of Data Distillation and Condensation (DDC). DDC is a data-centric task where a representative (i.e., small but …☆13Updated 2 years ago
- The implementation for MLSys 2023 paper: "Cuttlefish: Low-rank Model Training without All The Tuning"☆45Updated 2 years ago
- ICLR2024 statistics☆48Updated last year
- OpenReview Submission Visualization (ICLR 2024/2025)☆151Updated 9 months ago
- ☆100Updated 3 years ago
- A simple PyTorch implementation of Flash MultiHead Attention☆20Updated last year
- SuperDebug, debugging made simple!☆17Updated 3 years ago
- ☆79Updated 5 months ago
- ☆34Updated 4 months ago
- differentiable top-k operator☆22Updated 7 months ago
- Code associated with the paper **Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees**.☆28Updated 2 years ago
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification☆61Updated 3 weeks ago
- ☆52Updated last year
- Dynamic Context Selection for Efficient Long-Context LLMs☆38Updated 2 months ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling☆86Updated 2 years ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code☆42Updated last month
- ICLR 2021 Stats & Graphs☆31Updated 3 years ago