suquark / llm4phd
Examples and instructions on using LLMs (especially ChatGPT) for PhD work
☆109 · Updated 2 years ago
Alternatives and similar repositories for llm4phd:
Users interested in llm4phd are comparing it to the libraries listed below.
- [ICLR 2025] DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference ☆21 · Updated 3 weeks ago
- ChatGPT - Review & Rebuttal: A browser extension for generating reviews and rebuttals, powered by ChatGPT. ☆250 · Updated 2 years ago
- My Curriculum Vitae ☆62 · Updated 3 years ago
- ☆75 · Updated 2 years ago
- https://csstipendrankings.org ☆207 · Updated 3 weeks ago
- My paper/code reading notes in Chinese ☆46 · Updated 11 months ago
- ☆35 · Updated 5 years ago
- Efficient research work environment setup for computer science and general workflow for Deep Learning experiments ☆123 · Updated 3 years ago
- SuperDebug: debugging made simple! ☆17 · Updated 2 years ago
- The accompanying code for "Memory-efficient Transformers via Top-k Attention" (Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonatha… ☆67 · Updated 3 years ago
- ☆26 · Updated 3 years ago
- A thin wrapper around ChatGPT for improving paper writing. ☆254 · Updated 2 years ago
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆116 · Updated last year
- A comprehensive overview of Data Distillation and Condensation (DDC). DDC is a data-centric task where a representative (i.e., small but … ☆13 · Updated 2 years ago
- ICLR 2024 statistics ☆47 · Updated last year
- Code associated with the paper **Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees**. ☆28 · Updated 2 years ago
- Official implementation of "Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization" ☆77 · Updated last year
- ☆101 · Updated 5 years ago
- The implementation for MLSys 2023 paper: "Cuttlefish: Low-rank Model Training without All The Tuning" ☆44 · Updated 2 years ago
- LongSpec: Long-Context Speculative Decoding with Efficient Drafting and Verification ☆52 · Updated 2 months ago
- A happy way for research! ☆23 · Updated 2 years ago
- ☆100 · Updated 3 years ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆41 · Updated last week
- A simple PyTorch implementation of Flash MultiHead Attention ☆21 · Updated last year
- [EuroSys'24] Minuet: Accelerating 3D Sparse Convolutions on GPUs ☆75 · Updated 11 months ago
- A list of awesome neural symbolic papers. ☆47 · Updated 2 years ago
- [TMLR 2025] Efficient Diffusion Models: A Survey ☆56 · Updated last week
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆18 · Updated last year
- Artifact for "Apparate: Rethinking Early Exits to Tame Latency-Throughput Tensions in ML Serving" [SOSP '24] ☆24 · Updated 5 months ago
- ☆25 · Updated 3 months ago