HPC-SJTU / xfold
Democratizing AlphaFold3: a PyTorch reimplementation to accelerate protein structure prediction
☆20Updated 4 months ago
Alternatives and similar repositories for xfold
Users interested in xfold are comparing it to the libraries listed below
- Democratizing AlphaFold3: a PyTorch reimplementation to accelerate protein structure prediction☆49Updated 10 months ago
- 🧪 Ultrafast bisulfite☆37Updated last year
- OpenCAEPoro for ASC 2024☆37Updated last year
- The Zaychik Power Controller server☆13Updated last year
- Repository for HPCGame 1st Problems.☆68Updated last year
- A Throughput-Optimized Pipeline Parallel Inference System for Large Language Models☆42Updated 2 months ago
- Documentation for HPC course☆156Updated 4 months ago
- Wiki for HPC☆121Updated 2 months ago
- The dataset and baseline code for ASC23 LLM inference optimization challenge.☆32Updated last year
- HPC-Lab for the High Performance Computing course, Spring 2023, Tsinghua University (Introduction to High Performance Computing @ THU).☆24Updated 2 years ago
- Puzzles for learning Triton, play it with minimal environment configuration!☆545Updated 3 weeks ago
- ☆259Updated last week
- A distributed implementation of AlphaFold3 based on xfold and tpp-pytorch-extension☆12Updated 4 months ago
- Intel® Tensor Processing Primitives extension for Pytorch*☆17Updated 2 weeks ago
- A collection of noteworthy MLSys bloggers (Algorithms/Systems)☆285Updated 9 months ago
- UCAS HPC course code☆15Updated 2 years ago
- Solution of Programming Massively Parallel Processors☆50Updated last year
- performance engineering☆30Updated last year
- Summary of some awesome work for optimizing LLM inference☆116Updated 4 months ago
- ☆12Updated 3 weeks ago
- A simple Chinese-language guide to the Mac☆22Updated 2 years ago
- MultiArchKernelBench: A Multi-Platform Benchmark for Kernel Generation☆28Updated last week
- Stepwise optimizations of DGEMM on CPU, eventually surpassing Intel MKL performance, even with multithreading.☆153Updated 3 years ago
- High performance Transformer implementation in C++.☆135Updated 9 months ago
- NEO is an LLM inference engine built to relieve GPU memory pressure via CPU offloading☆64Updated 4 months ago
- Summary of the Specs of Commonly Used GPUs for Training and Inference of LLM☆63Updated 2 months ago
- A tiny yet powerful LLM inference system tailored for research purposes. vLLM-equivalent performance with only 2k lines of code (2% of …☆278Updated 4 months ago
- Repo for SpecEE: Accelerating Large Language Model Inference with Speculative Early Exiting (ISCA25)☆64Updated 5 months ago
- SJTU HPC user documentation site☆181Updated last month
- Since the emergence of ChatGPT in 2022, accelerating Large Language Models has become increasingly important. Here is a list of pap…☆278Updated 7 months ago