suquark / llm4phd
Examples and instructions on using LLMs (especially ChatGPT) for PhD work
☆107 · Updated 2 years ago
Alternatives and similar repositories for llm4phd
Users interested in llm4phd are comparing it to the repositories listed below.
- My Curriculum Vitae ☆62 · Updated 4 years ago
- ☆101 · Updated 3 years ago
- ChatGPT - Review & Rebuttal: A browser extension for generating reviews and rebuttals, powered by ChatGPT ☆251 · Updated 2 years ago
- [ICLR 2025] DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference ☆45 · Updated 5 months ago
- A thin wrapper around ChatGPT for improving paper writing ☆253 · Updated 2 years ago
- ☆35 · Updated 6 years ago
- ☆26 · Updated 4 years ago
- A comprehensive overview of Data Distillation and Condensation (DDC). DDC is a data-centric task where a representative (i.e., small but … ☆13 · Updated 3 years ago
- https://csstipendrankings.org ☆218 · Updated 2 months ago
- ☆100 · Updated 9 months ago
- ☆77 · Updated 3 years ago
- A simple PyTorch implementation of Flash MultiHead Attention ☆20 · Updated last year
- ICLR 2024 statistics ☆48 · Updated 2 years ago
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning ☆18 · Updated last year
- Efficient research work environment setup for computer science and a general workflow for Deep Learning experiments ☆126 · Updated 3 years ago
- ☆203 · Updated 3 weeks ago
- ☆41 · Updated last year
- OpenReview Submission Visualization (ICLR 2024/2025) ☆153 · Updated last year
- Differentiable top-k operator ☆22 · Updated 11 months ago
- ICLR 2021 Stats & Graphs ☆31 · Updated 3 years ago
- Code associated with the paper "Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees" ☆28 · Updated 2 years ago
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆121 · Updated last year
- SuperDebug: debugging made simple! ☆17 · Updated 3 years ago
- The implementation for the MLSys 2023 paper "Cuttlefish: Low-rank Model Training without All The Tuning" ☆44 · Updated 2 years ago
- Official implementation of "Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization" ☆81 · Updated last year
- Tiny-FSDP, a minimalistic re-implementation of PyTorch FSDP ☆91 · Updated 3 months ago
- ☆35 · Updated 9 months ago
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆87 · Updated 2 years ago
- The accompanying code for "Memory-efficient Transformers via Top-k Attention" (Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonatha… ☆70 · Updated 4 years ago
- ☆51 · Updated 2 years ago