FlagOpen / FlagPerf

FlagPerf is an open-source software platform for benchmarking AI chips.

☆ 314 · Updated this week
Related projects
Alternatives and complementary repositories for FlagPerf
- GLake: optimizing GPU memory management and IO transmission. ☆ 381 · Updated 3 months ago
- FlagGems is an operator library for large language models implemented in the Triton language. ☆ 343 · Updated this week
- FlagScale is a large model toolkit based on open-source projects. ☆ 178 · Updated this week
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆ 206 · Updated this week
- Disaggregated serving system for Large Language Models (LLMs). ☆ 364 · Updated 3 months ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆ 219 · Updated 5 months ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆ 263 · Updated last year
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆ 547 · Updated last month
- DLRover: An Automatic Distributed Deep Learning System. ☆ 1,277 · Updated this week
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆ 457 · Updated 8 months ago
- Transformer-related optimization, including BERT and GPT. ☆ 39 · Updated last year
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆ 90 · Updated last year
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆ 76 · Updated 8 months ago
- Deep learning framework performance profiling toolkit. ☆ 276 · Updated 2 years ago
- MIXQ: Taming Dynamic Outliers in Mixed-Precision Quantization by Online Prediction. ☆ 83 · Updated 3 weeks ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆ 46 · Updated 3 months ago
- PyTorch distributed training acceleration framework. ☆ 34 · Updated this week
- Analyze the inference of Large Language Models (LLMs), covering computation, storage, transmission, and hardware roofline mod… ☆ 312 · Updated 2 months ago
- Dynamic memory management for serving LLMs without PagedAttention. ☆ 240 · Updated last week
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for long-context transformer model training and inference. ☆ 364 · Updated this week
- An acceleration library that supports arbitrary bit-width combinatorial quantization operations. ☆ 226 · Updated last month