suquark / llm4phd
Examples and instructions on using LLMs (especially ChatGPT) for a PhD
☆107 · Updated 2 years ago
Alternatives and similar repositories for llm4phd
Users who are interested in llm4phd are comparing it to the libraries listed below
- My Curriculum Vitae ☆62 · Updated 4 years ago
- [ICLR 2025] DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference ☆42 · Updated 4 months ago
- ChatGPT - Review & Rebuttal: A browser extension for generating reviews and rebuttals, powered by ChatGPT ☆252 · Updated 2 years ago
- ☆26 · Updated 4 years ago
- Efficient research work environment setup for computer science and a general workflow for Deep Learning experiments ☆126 · Updated 3 years ago
- ☆101 · Updated 3 years ago
- ☆77 · Updated 3 years ago
- ☆35 · Updated 5 years ago
- A comprehensive overview of Data Distillation and Condensation (DDC). DDC is a data-centric task where a representative (i.e., small but … ☆13 · Updated 2 years ago
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆120 · Updated last year
- A simple PyTorch implementation of Flash MultiHead Attention ☆19 · Updated last year
- ☆95 · Updated 8 months ago
- A thin wrapper around ChatGPT for improving paper writing ☆253 · Updated 2 years ago
- SuperDebug: debugging made simple! ☆17 · Updated 3 years ago
- Code associated with the paper "Fine-tuning Language Models over Slow Networks using Activation Compression with Guarantees" ☆27 · Updated 2 years ago
- ICLR 2024 statistics ☆48 · Updated last year
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆18 · Updated last year
- https://csstipendrankings.org ☆216 · Updated last month
- The implementation for the MLSys 2023 paper "Cuttlefish: Low-rank Model Training without All The Tuning" ☆43 · Updated 2 years ago
- OpenReview Submission Visualization (ICLR 2024/2025) ☆151 · Updated last year
- [EVA ICLR'23; LARA ICML'22] Efficient attention mechanisms via control variates, random features, and importance sampling ☆87 · Updated 2 years ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆52 · Updated 4 months ago
- ☆34 · Updated 7 months ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- ☆41 · Updated last year
- Patch convolution to avoid large GPU memory usage of Conv2D ☆93 · Updated 9 months ago
- ☆52 · Updated 2 years ago
- Official implementation of "Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization" ☆81 · Updated last year
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆123 · Updated last year
- Quantized Attention on GPU ☆44 · Updated 11 months ago