megvii-research / megfile
Megvii FILE Library - working with files in Python the same way as the standard library
☆135 · Updated 2 weeks ago
Alternatives and similar repositories for megfile:
Users interested in megfile are comparing it to the libraries listed below:
- FireFlyer Record file format, writer and reader for DL training samples. ☆139 · Updated 2 years ago
- A hyperparameter manager for deep learning experiments. ☆96 · Updated 2 years ago
- Simple dynamic batching inference. ☆145 · Updated 2 years ago
- mllm-npu: training multimodal large language models on Ascend NPUs. ☆90 · Updated 5 months ago
- Converter from MegEngine to other frameworks. ☆69 · Updated last year
- Patch convolution to avoid large GPU memory usage of Conv2D. ☆85 · Updated 3 weeks ago
- NART (NART is not A RunTime), a deep learning inference framework. ☆38 · Updated last year
- Datasets, Transforms and Models specific to Computer Vision. ☆84 · Updated last year
- CVFusion, an open-source deep learning compiler that fuses OpenCV operators. ☆29 · Updated 2 years ago
- ☆76 · Updated last year
- TVMScript kernel for deformable attention. ☆24 · Updated 3 years ago
- Zero Bubble Pipeline Parallelism. ☆338 · Updated last week
- Models and examples built with OneFlow. ☆96 · Updated 4 months ago
- Demystify RAM usage in multi-process data loaders. ☆187 · Updated last year
- A collection of memory-efficient attention operators implemented in the Triton language. ☆240 · Updated 8 months ago
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch. ☆303 · Updated this week
- A communication library for deep learning. ☆50 · Updated 6 months ago
- A PyTorch-native LLM training framework. ☆732 · Updated last month
- LiBai(李白): a toolbox for large-scale distributed parallel training. ☆398 · Updated last month
- Easy Parallel Library (EPL), a general and efficient deep learning framework for distributed model training. ☆266 · Updated last year
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆469 · Updated 11 months ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for long-context transformer model training and inference. ☆428 · Updated this week
- Useful dotfiles, including vim, zsh, tmux, and VS Code configurations. ☆18 · Updated last month
- Slicing a PyTorch tensor into parallel shards. ☆298 · Updated 3 years ago
- ActNN: reducing training memory footprint via 2-bit activation compressed training. ☆201 · Updated 2 years ago
- A model compression and acceleration toolbox based on PyTorch. ☆329 · Updated last year
- A high-performance, extensible Python AOT compiler. ☆416 · Updated last year
- A parallel VAE that avoids OOM for high-resolution image generation. ☆53 · Updated 3 weeks ago
- Large-scale image dataset visualization tool. ☆119 · Updated last year
- Decode JPEG images on the GPU using PyTorch. ☆86 · Updated last year