megvii-research / megfile
Megvii FILE Library: work with files in Python the same way as with the standard library.
☆156 · Updated last week
Alternatives and similar repositories for megfile
Users interested in megfile are comparing it to the libraries listed below.
- Useful dotfiles, including vim, zsh, tmux and vscode configs ☆18 · Updated last month
- FireFlyer Record file format, writer and reader for DL training samples ☆235 · Updated 2 years ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ☆543 · Updated this week
- To pioneer training long-context multi-modal transformer models ☆59 · Updated 2 months ago
- Demystify RAM Usage in Multi-Process Data Loaders ☆204 · Updated 2 years ago
- mllm-npu: training multimodal large language models on Ascend NPUs ☆93 · Updated last year
- ☆427 · Updated 2 months ago
- Patch convolution to avoid large GPU memory usage of Conv2D ☆92 · Updated 9 months ago
- A hyperparameter manager for deep learning experiments ☆96 · Updated 2 years ago
- Large-scale image dataset visualization tool ☆118 · Updated 3 months ago
- VeOmni: Scaling Any Modality Model Training with Model-Centric Distributed Recipe Zoo ☆1,231 · Updated last week
- A Unified Cache Acceleration Framework for 🤗Diffusers: Qwen-Image-Lightning, Qwen-Image, HunyuanImage, Wan, FLUX, etc. ☆421 · Updated this week
- TVMScript kernel for deformable attention ☆25 · Updated 3 years ago
- Datasets, Transforms and Models specific to Computer Vision ☆90 · Updated last year
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆444 · Updated last month
- An industrial extension library of PyTorch to accelerate large-scale model training ☆49 · Updated 2 months ago
- ☆61 · Updated last year
- ☆64 · Updated this week
- High-performance inference engine for diffusion models ☆94 · Updated last month
- Converter from MegEngine to other frameworks ☆70 · Updated 2 years ago
- Megatron's multi-modal data loader ☆252 · Updated last week
- A parallel VAE that avoids OOM in high-resolution image generation ☆81 · Updated 2 months ago
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆407 · Updated 2 months ago
- OneFlow Serving ☆20 · Updated 6 months ago
- A high-performance, extensible Python AOT compiler ☆440 · Updated 2 years ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆582 · Updated 2 weeks ago
- ☆182 · Updated 9 months ago
- A model compression and acceleration toolbox based on PyTorch ☆331 · Updated last year
- ByteCheckpoint: A Unified Checkpointing Library for LFMs ☆249 · Updated 3 months ago
- ☆15 · Updated last year