turingaicloud / quickstart
https://tacc.ust.hk
☆82 · Updated 2 years ago
Alternatives and similar repositories for quickstart
Users interested in quickstart are comparing it to the libraries listed below.
- ChatGPT - Review & Rebuttal: A browser extension for generating reviews and rebuttals, powered by ChatGPT ☆251 · Updated 2 years ago
- Documents used for grad school applications ☆309 · Updated 4 years ago
- A simple pip-installable Python tool to generate your own HTML citation world map from your Google Scholar ID. ☆642 · Updated last week
- A mathematics course taught to first-year Ph.D. students in computer science and related areas @zju ☆60 · Updated last year
- ☆26 · Updated 4 years ago
- ICLR2023 statistics ☆59 · Updated 2 years ago
- ☆37 · Updated 8 months ago
- My Curriculum Vitae ☆62 · Updated 4 years ago
- ☆28 · Updated last year
- Simply calls ChatGPT APIs and stores chat history in CSV ☆22 · Updated 2 years ago
- Examples and instructions on using LLMs (especially ChatGPT) for a PhD ☆107 · Updated 2 years ago
- [ICLR 2025] DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference ☆45 · Updated 6 months ago
- A subjective learning guide for generative AI research ☆88 · Updated last year
- ICLR2024 statistics ☆48 · Updated 2 years ago
- Easily download anonymous GitHub repositories from https://anonymous.4open.science/ with a GUI interface ☆97 · Updated last year
- OpenReview Submission Visualization (ICLR 2024/2025) ☆153 · Updated last year
- ☆101 · Updated 6 years ago
- ☆102 · Updated last year
- Official implementation of MASS: Multi-Agent Simulation Scaling for Portfolio Construction ☆155 · Updated last month
- Must-read papers on improving efficiency for LLM serving clusters ☆32 · Updated 6 months ago
- ☆169 · Updated 4 years ago
- A Survey of Direct Preference Optimization (DPO) ☆86 · Updated 5 months ago
- Survey Paper List - Efficient LLM and Foundation Models ☆258 · Updated last year
- A convenient script for grabbing free GPUs ☆384 · Updated this week
- ☆200 · Updated 2 years ago
- ☆38 · Updated 3 years ago
- PyTorch implementation of the paper "Response Length Perception and Sequence Scheduling: An LLM-Empowered LLM Inference Pipeline" ☆93 · Updated 2 years ago
- A curated reading list of research in Mixture-of-Experts (MoE) ☆653 · Updated last year
- A Telegram bot to recommend arXiv papers ☆289 · Updated last month
- Discrete Diffusion Forcing (D2F): dLLMs Can Do Faster-Than-AR Inference ☆214 · Updated 2 months ago